2025-05-19 21:09:32.269512 | Job console starting
2025-05-19 21:09:32.283610 | Updating git repos
2025-05-19 21:09:32.359260 | Cloning repos into workspace
2025-05-19 21:09:32.536452 | Restoring repo states
2025-05-19 21:09:32.556846 | Merging changes
2025-05-19 21:09:32.557018 | Checking out repos
2025-05-19 21:09:32.849115 | Preparing playbooks
2025-05-19 21:09:33.481686 | Running Ansible setup
2025-05-19 21:09:37.848133 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-19 21:09:38.696311 |
2025-05-19 21:09:38.696489 | PLAY [Base pre]
2025-05-19 21:09:38.713915 |
2025-05-19 21:09:38.714078 | TASK [Setup log path fact]
2025-05-19 21:09:38.744306 | orchestrator | ok
2025-05-19 21:09:38.761803 |
2025-05-19 21:09:38.761989 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-19 21:09:38.804263 | orchestrator | ok
2025-05-19 21:09:38.817137 |
2025-05-19 21:09:38.817265 | TASK [emit-job-header : Print job information]
2025-05-19 21:09:38.873791 | # Job Information
2025-05-19 21:09:38.874126 | Ansible Version: 2.16.14
2025-05-19 21:09:38.874188 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-05-19 21:09:38.874248 | Pipeline: post
2025-05-19 21:09:38.874288 | Executor: 521e9411259a
2025-05-19 21:09:38.874326 | Triggered by: https://github.com/osism/testbed/commit/94d371d20ac789cea6d39f01979b8c56bd2f4276
2025-05-19 21:09:38.874364 | Event ID: e80e056e-34dd-11f0-9af2-84ea3bd1eb53
2025-05-19 21:09:38.884873 |
2025-05-19 21:09:38.885090 | LOOP [emit-job-header : Print node information]
2025-05-19 21:09:39.001359 | orchestrator | ok:
2025-05-19 21:09:39.001589 | orchestrator | # Node Information
2025-05-19 21:09:39.001625 | orchestrator | Inventory Hostname: orchestrator
2025-05-19 21:09:39.001651 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-19 21:09:39.001674 | orchestrator | Username: zuul-testbed05
2025-05-19 21:09:39.001697 | orchestrator | Distro: Debian 12.11
2025-05-19 21:09:39.001721 | orchestrator | Provider: static-testbed
2025-05-19 21:09:39.001746 | orchestrator | Region:
2025-05-19 21:09:39.001775 | orchestrator | Label: testbed-orchestrator
2025-05-19 21:09:39.001796 | orchestrator | Product Name: OpenStack Nova
2025-05-19 21:09:39.001815 | orchestrator | Interface IP: 81.163.193.140
2025-05-19 21:09:39.035489 |
2025-05-19 21:09:39.035733 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-19 21:09:39.531858 | orchestrator -> localhost | changed
2025-05-19 21:09:39.551819 |
2025-05-19 21:09:39.552046 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-19 21:09:40.680664 | orchestrator -> localhost | changed
2025-05-19 21:09:40.705232 |
2025-05-19 21:09:40.705390 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-19 21:09:41.014994 | orchestrator -> localhost | ok
2025-05-19 21:09:41.032875 |
2025-05-19 21:09:41.033142 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-19 21:09:41.075766 | orchestrator | ok
2025-05-19 21:09:41.100068 | orchestrator | included: /var/lib/zuul/builds/fee901b6ba114e6d9b855d30c91c5e56/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-19 21:09:41.109079 |
2025-05-19 21:09:41.109217 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-19 21:09:42.913503 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-19 21:09:42.914237 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/fee901b6ba114e6d9b855d30c91c5e56/work/fee901b6ba114e6d9b855d30c91c5e56_id_rsa
2025-05-19 21:09:42.914427 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/fee901b6ba114e6d9b855d30c91c5e56/work/fee901b6ba114e6d9b855d30c91c5e56_id_rsa.pub
2025-05-19 21:09:42.914558 | orchestrator -> localhost | The key fingerprint is:
2025-05-19 21:09:42.914671 | orchestrator -> localhost | SHA256:Cms7O1bwb3lKAjxiTM7Yr43nvQk0feOaeFLu6bn8XHo zuul-build-sshkey
2025-05-19 21:09:42.914787 | orchestrator -> localhost | The key's randomart image is:
2025-05-19 21:09:42.915048 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-19 21:09:42.915165 | orchestrator -> localhost | | |
2025-05-19 21:09:42.915265 | orchestrator -> localhost | | |
2025-05-19 21:09:42.915359 | orchestrator -> localhost | | . |
2025-05-19 21:09:42.915477 | orchestrator -> localhost | | B ... |
2025-05-19 21:09:42.915568 | orchestrator -> localhost | |. B Bo. S |
2025-05-19 21:09:42.915668 | orchestrator -> localhost | | . + *++ . |
2025-05-19 21:09:42.915759 | orchestrator -> localhost | | =+o.o.. |
2025-05-19 21:09:42.915850 | orchestrator -> localhost | | =**oX+oE |
2025-05-19 21:09:42.916033 | orchestrator -> localhost | | o+*O&==+ |
2025-05-19 21:09:42.916131 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-19 21:09:42.916351 | orchestrator -> localhost | ok: Runtime: 0:00:01.276115
2025-05-19 21:09:42.938579 |
2025-05-19 21:09:42.938778 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-19 21:09:42.972579 | orchestrator | ok
2025-05-19 21:09:42.990324 | orchestrator | included: /var/lib/zuul/builds/fee901b6ba114e6d9b855d30c91c5e56/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-19 21:09:43.004144 |
2025-05-19 21:09:43.004279 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-19 21:09:43.030236 | orchestrator | skipping: Conditional result was False
2025-05-19 21:09:43.040973 |
2025-05-19 21:09:43.041149 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-19 21:09:43.838409 | orchestrator | changed
2025-05-19 21:09:43.845566 |
2025-05-19 21:09:43.845690 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-19 21:09:44.140777 | orchestrator | ok
2025-05-19 21:09:44.151139 |
2025-05-19 21:09:44.151305 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-19 21:09:44.609274 | orchestrator | ok
2025-05-19 21:09:44.619448 |
2025-05-19 21:09:44.619607 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-19 21:09:45.043765 | orchestrator | ok
2025-05-19 21:09:45.052823 |
2025-05-19 21:09:45.052999 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-19 21:09:45.079483 | orchestrator | skipping: Conditional result was False
2025-05-19 21:09:45.087781 |
2025-05-19 21:09:45.087897 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-19 21:09:45.570417 | orchestrator -> localhost | changed
2025-05-19 21:09:45.591934 |
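The `Create Temp SSH key` task above prints an OpenSSH SHA256 fingerprint (`SHA256:Cms7O1bw…`). As a sketch of how such a fingerprint is derived: it is the base64-encoded SHA-256 of the decoded public-key blob, with padding stripped. The ed25519 blob built below is a dummy for illustration, not the build key from this log.

```python
import base64
import hashlib
import struct

def ssh_fingerprint(pubkey_line: str) -> str:
    """OpenSSH-style SHA256 fingerprint, as printed by `ssh-keygen -lf`."""
    # A public key line looks like: "<type> <base64-blob> [comment]"
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    # OpenSSH prints the digest base64-encoded with '=' padding stripped.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def wire_string(b: bytes) -> bytes:
    """SSH wire format: a 4-byte big-endian length prefix, then the bytes."""
    return struct.pack(">I", len(b)) + b

# Dummy ssh-ed25519 blob: algorithm name + 32 zero bytes, each length-prefixed.
blob = wire_string(b"ssh-ed25519") + wire_string(b"\x00" * 32)
line = "ssh-ed25519 " + base64.b64encode(blob).decode() + " demo"
fp = ssh_fingerprint(line)
print(fp)
```

The same function applied to the real `…_id_rsa.pub` file from the build workspace would reproduce the fingerprint logged above.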
2025-05-19 21:09:45.592077 | TASK [add-build-sshkey : Add back temp key]
2025-05-19 21:09:45.961985 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/fee901b6ba114e6d9b855d30c91c5e56/work/fee901b6ba114e6d9b855d30c91c5e56_id_rsa (zuul-build-sshkey)
2025-05-19 21:09:45.962354 | orchestrator -> localhost | ok: Runtime: 0:00:00.019741
2025-05-19 21:09:45.970666 |
2025-05-19 21:09:45.970781 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-19 21:09:46.414580 | orchestrator | ok
2025-05-19 21:09:46.422930 |
2025-05-19 21:09:46.423066 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-19 21:09:46.458471 | orchestrator | skipping: Conditional result was False
2025-05-19 21:09:46.525352 |
2025-05-19 21:09:46.525503 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-19 21:09:46.931725 | orchestrator | ok
2025-05-19 21:09:46.953886 |
2025-05-19 21:09:46.954134 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-19 21:09:46.997033 | orchestrator | ok
2025-05-19 21:09:47.005334 |
2025-05-19 21:09:47.005479 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-19 21:09:47.316243 | orchestrator -> localhost | ok
2025-05-19 21:09:47.324251 |
2025-05-19 21:09:47.324372 | TASK [validate-host : Collect information about the host]
2025-05-19 21:09:48.552625 | orchestrator | ok
2025-05-19 21:09:48.568497 |
2025-05-19 21:09:48.568636 | TASK [validate-host : Sanitize hostname]
2025-05-19 21:09:48.635634 | orchestrator | ok
2025-05-19 21:09:48.644430 |
2025-05-19 21:09:48.644557 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-19 21:09:49.239308 | orchestrator -> localhost | changed
2025-05-19 21:09:49.253611 |
2025-05-19 21:09:49.253782 | TASK [validate-host : Collect information about zuul worker]
2025-05-19 21:09:49.720356 | orchestrator | ok
2025-05-19 21:09:49.730560 |
2025-05-19 21:09:49.730748 | TASK [validate-host : Write out all zuul information for each host]
2025-05-19 21:09:50.326150 | orchestrator -> localhost | changed
2025-05-19 21:09:50.348576 |
2025-05-19 21:09:50.348737 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-19 21:09:50.640523 | orchestrator | ok
2025-05-19 21:09:50.651209 |
2025-05-19 21:09:50.651395 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-19 21:10:05.880250 | orchestrator | changed:
2025-05-19 21:10:05.880593 | orchestrator | .d..t...... src/
2025-05-19 21:10:05.880652 | orchestrator | .d..t...... src/github.com/
2025-05-19 21:10:05.880694 | orchestrator | .d..t...... src/github.com/osism/
2025-05-19 21:10:05.880731 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-19 21:10:05.880765 | orchestrator | RedHat.yml
2025-05-19 21:10:05.893520 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-19 21:10:05.893538 | orchestrator | RedHat.yml
2025-05-19 21:10:05.893591 | orchestrator | = 1.53.0"...
2025-05-19 21:10:18.128035 | orchestrator | 21:10:18.127 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-19 21:10:18.208651 | orchestrator | 21:10:18.208 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-19 21:10:19.579579 | orchestrator | 21:10:19.579 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.0.0...
2025-05-19 21:10:21.636464 | orchestrator | 21:10:21.636 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.0.0 (signed, key ID 4F80527A391BEFD2)
2025-05-19 21:10:22.908916 | orchestrator | 21:10:22.908 STDOUT terraform: - Installing hashicorp/local v2.5.3...
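The `Synchronize src repos` task above logs rsync `--itemize-changes` strings such as `.d..t......` and `.L..t......`. A small decoder sketch of those flags (field meanings per the rsync man page; the `decode_itemize` helper is illustrative, not part of the job):

```python
def decode_itemize(flags: str) -> str:
    """Decode an rsync --itemize-changes string like '.d..t......'.

    Position 0 is the update type, position 1 the file type; the remaining
    letters flag what differs (c=checksum, s=size, t=mtime, p=perms,
    o=owner, g=group, u=use time, a=ACL, x=xattr).
    """
    update = {".": "no content update", ">": "received", "<": "sent",
              "c": "created (local)", "h": "hard link", "*": "message"}[flags[0]]
    ftype = {"f": "file", "d": "directory", "L": "symlink",
             "D": "device", "S": "special"}[flags[1]]
    attrs = [c for c in flags[2:] if c not in ".+ "]
    return f"{ftype}: {update}; changed: {attrs or 'nothing'}"

print(decode_itemize(".d..t......"))  # directory, only mtime differs
print(decode_itemize(".L..t......"))  # symlink, only mtime differs
```

So the entries above record rsync touching only the modification times of the synchronized directories and symlinks, not their contents.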
2025-05-19 21:10:23.962367 | orchestrator | 21:10:23.962 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-19 21:10:25.178557 | orchestrator | 21:10:25.178 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-19 21:10:26.233813 | orchestrator | 21:10:26.233 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-19 21:10:26.233975 | orchestrator | 21:10:26.233 STDOUT terraform: Providers are signed by their developers.
2025-05-19 21:10:26.233996 | orchestrator | 21:10:26.233 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-19 21:10:26.234198 | orchestrator | 21:10:26.234 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-19 21:10:26.234308 | orchestrator | 21:10:26.234 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-19 21:10:26.234441 | orchestrator | 21:10:26.234 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-19 21:10:26.234595 | orchestrator | 21:10:26.234 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-19 21:10:26.234615 | orchestrator | 21:10:26.234 STDOUT terraform: you run "tofu init" in the future.
2025-05-19 21:10:26.234783 | orchestrator | 21:10:26.234 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-19 21:10:26.234896 | orchestrator | 21:10:26.234 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-19 21:10:26.235021 | orchestrator | 21:10:26.234 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-19 21:10:26.235050 | orchestrator | 21:10:26.235 STDOUT terraform: should now work.
2025-05-19 21:10:26.235214 | orchestrator | 21:10:26.235 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-19 21:10:26.235332 | orchestrator | 21:10:26.235 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-19 21:10:26.235453 | orchestrator | 21:10:26.235 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-19 21:10:26.404449 | orchestrator | 21:10:26.404 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-19 21:10:26.609949 | orchestrator | 21:10:26.609 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-19 21:10:26.610130 | orchestrator | 21:10:26.609 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-19 21:10:26.610163 | orchestrator | 21:10:26.609 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-19 21:10:26.610176 | orchestrator | 21:10:26.609 STDOUT terraform: for this configuration.
2025-05-19 21:10:26.850348 | orchestrator | 21:10:26.850 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-19 21:10:26.964677 | orchestrator | 21:10:26.964 STDOUT terraform: ci.auto.tfvars
2025-05-19 21:10:26.971635 | orchestrator | 21:10:26.971 STDOUT terraform: default_custom.tf
2025-05-19 21:10:27.201221 | orchestrator | 21:10:27.200 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-05-19 21:10:28.207177 | orchestrator | 21:10:28.206 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
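Every console line carries a microsecond wall-clock timestamp, so step durations can be recovered by subtracting two stamps. A sketch using the provider-download window from the init output above (timestamps copied from this log):

```python
from datetime import datetime

# Zuul console timestamps look like "2025-05-19 21:10:18.128035".
FMT = "%Y-%m-%d %H:%M:%S.%f"

# First "Finding ..." entry vs. "successfully initialized!" entry above.
start = datetime.strptime("2025-05-19 21:10:18.128035", FMT)
end = datetime.strptime("2025-05-19 21:10:26.234783", FMT)

elapsed = (end - start).total_seconds()
print(f"init providers: {elapsed:.3f}s")  # → init providers: 8.107s
```

The same subtraction works for any pair of entries, e.g. measuring the ~15 s repo synchronization earlier in the log.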
2025-05-19 21:10:28.724918 | orchestrator | 21:10:28.724 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-19 21:10:28.919768 | orchestrator | 21:10:28.919 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-19 21:10:28.919903 | orchestrator | 21:10:28.919 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-19 21:10:28.919915 | orchestrator | 21:10:28.919 STDOUT terraform:   + create
2025-05-19 21:10:28.919923 | orchestrator | 21:10:28.919 STDOUT terraform:  <= read (data resources)
2025-05-19 21:10:28.919931 | orchestrator | 21:10:28.919 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-19 21:10:28.919942 | orchestrator | 21:10:28.919 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-19 21:10:28.919965 | orchestrator | 21:10:28.919 STDOUT terraform:   # (config refers to values not yet known)
2025-05-19 21:10:28.920000 | orchestrator | 21:10:28.919 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-19 21:10:28.920036 | orchestrator | 21:10:28.919 STDOUT terraform:   + checksum = (known after apply)
2025-05-19 21:10:28.920075 | orchestrator | 21:10:28.920 STDOUT terraform:   + created_at = (known after apply)
2025-05-19 21:10:28.920110 | orchestrator | 21:10:28.920 STDOUT terraform:   + file = (known after apply)
2025-05-19 21:10:28.920145 | orchestrator | 21:10:28.920 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.920184 | orchestrator | 21:10:28.920 STDOUT terraform:   + metadata = (known after apply)
2025-05-19 21:10:28.920218 | orchestrator | 21:10:28.920 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-19 21:10:28.920253 | orchestrator | 21:10:28.920 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-19 21:10:28.920279 | orchestrator | 21:10:28.920 STDOUT terraform:   + most_recent = true
2025-05-19 21:10:28.920315 | orchestrator | 21:10:28.920 STDOUT terraform:   + name = (known after apply)
2025-05-19 21:10:28.920349 | orchestrator | 21:10:28.920 STDOUT terraform:   + protected = (known after apply)
2025-05-19 21:10:28.920385 | orchestrator | 21:10:28.920 STDOUT terraform:   + region = (known after apply)
2025-05-19 21:10:28.920419 | orchestrator | 21:10:28.920 STDOUT terraform:   + schema = (known after apply)
2025-05-19 21:10:28.920459 | orchestrator | 21:10:28.920 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-19 21:10:28.920493 | orchestrator | 21:10:28.920 STDOUT terraform:   + tags = (known after apply)
2025-05-19 21:10:28.920527 | orchestrator | 21:10:28.920 STDOUT terraform:   + updated_at = (known after apply)
2025-05-19 21:10:28.920545 | orchestrator | 21:10:28.920 STDOUT terraform:   }
2025-05-19 21:10:28.920612 | orchestrator | 21:10:28.920 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-19 21:10:28.920647 | orchestrator | 21:10:28.920 STDOUT terraform:   # (config refers to values not yet known)
2025-05-19 21:10:28.920693 | orchestrator | 21:10:28.920 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-19 21:10:28.920728 | orchestrator | 21:10:28.920 STDOUT terraform:   + checksum = (known after apply)
2025-05-19 21:10:28.920763 | orchestrator | 21:10:28.920 STDOUT terraform:   + created_at = (known after apply)
2025-05-19 21:10:28.920798 | orchestrator | 21:10:28.920 STDOUT terraform:   + file = (known after apply)
2025-05-19 21:10:28.920848 | orchestrator | 21:10:28.920 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.920903 | orchestrator | 21:10:28.920 STDOUT terraform:   + metadata = (known after apply)
2025-05-19 21:10:28.920938 | orchestrator | 21:10:28.920 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-19 21:10:28.920974 | orchestrator | 21:10:28.920 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-19 21:10:28.921001 | orchestrator | 21:10:28.920 STDOUT terraform:   + most_recent = true
2025-05-19 21:10:28.921038 | orchestrator | 21:10:28.920 STDOUT terraform:   + name = (known after apply)
2025-05-19 21:10:28.921074 | orchestrator | 21:10:28.921 STDOUT terraform:   + protected = (known after apply)
2025-05-19 21:10:28.921111 | orchestrator | 21:10:28.921 STDOUT terraform:   + region = (known after apply)
2025-05-19 21:10:28.921148 | orchestrator | 21:10:28.921 STDOUT terraform:   + schema = (known after apply)
2025-05-19 21:10:28.921186 | orchestrator | 21:10:28.921 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-19 21:10:28.921217 | orchestrator | 21:10:28.921 STDOUT terraform:   + tags = (known after apply)
2025-05-19 21:10:28.921253 | orchestrator | 21:10:28.921 STDOUT terraform:   + updated_at = (known after apply)
2025-05-19 21:10:28.921262 | orchestrator | 21:10:28.921 STDOUT terraform:   }
2025-05-19 21:10:28.921303 | orchestrator | 21:10:28.921 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-19 21:10:28.921336 | orchestrator | 21:10:28.921 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-19 21:10:28.921381 | orchestrator | 21:10:28.921 STDOUT terraform:   + content = (known after apply)
2025-05-19 21:10:28.921426 | orchestrator | 21:10:28.921 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-19 21:10:28.921468 | orchestrator | 21:10:28.921 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-19 21:10:28.921521 | orchestrator | 21:10:28.921 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-19 21:10:28.921562 | orchestrator | 21:10:28.921 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-19 21:10:28.921610 | orchestrator | 21:10:28.921 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-19 21:10:28.921651 | orchestrator | 21:10:28.921 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-19 21:10:28.921679 | orchestrator | 21:10:28.921 STDOUT terraform:   + directory_permission = "0777"
2025-05-19 21:10:28.921710 | orchestrator | 21:10:28.921 STDOUT terraform:   + file_permission = "0644"
2025-05-19 21:10:28.921754 | orchestrator | 21:10:28.921 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-05-19 21:10:28.921799 | orchestrator | 21:10:28.921 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.921807 | orchestrator | 21:10:28.921 STDOUT terraform:   }
2025-05-19 21:10:28.921865 | orchestrator | 21:10:28.921 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-19 21:10:28.921896 | orchestrator | 21:10:28.921 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-19 21:10:28.921940 | orchestrator | 21:10:28.921 STDOUT terraform:   + content = (known after apply)
2025-05-19 21:10:28.921984 | orchestrator | 21:10:28.921 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-19 21:10:28.922053 | orchestrator | 21:10:28.921 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-19 21:10:28.922095 | orchestrator | 21:10:28.922 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-19 21:10:28.922139 | orchestrator | 21:10:28.922 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-19 21:10:28.922183 | orchestrator | 21:10:28.922 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-19 21:10:28.922231 | orchestrator | 21:10:28.922 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-19 21:10:28.922264 | orchestrator | 21:10:28.922 STDOUT terraform:   + directory_permission = "0777"
2025-05-19 21:10:28.922292 | orchestrator | 21:10:28.922 STDOUT terraform:   + file_permission = "0644"
2025-05-19 21:10:28.922328 | orchestrator | 21:10:28.922 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-05-19 21:10:28.922373 | orchestrator | 21:10:28.922 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.922382 | orchestrator | 21:10:28.922 STDOUT terraform:   }
2025-05-19 21:10:28.922436 | orchestrator | 21:10:28.922 STDOUT terraform:   # local_file.inventory will be created
2025-05-19 21:10:28.922466 | orchestrator | 21:10:28.922 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-19 21:10:28.922513 | orchestrator | 21:10:28.922 STDOUT terraform:   + content = (known after apply)
2025-05-19 21:10:28.922555 | orchestrator | 21:10:28.922 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-19 21:10:28.922600 | orchestrator | 21:10:28.922 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-19 21:10:28.922641 | orchestrator | 21:10:28.922 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-19 21:10:28.922690 | orchestrator | 21:10:28.922 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-19 21:10:28.922728 | orchestrator | 21:10:28.922 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-19 21:10:28.922771 | orchestrator | 21:10:28.922 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-19 21:10:28.922801 | orchestrator | 21:10:28.922 STDOUT terraform:   + directory_permission = "0777"
2025-05-19 21:10:28.922855 | orchestrator | 21:10:28.922 STDOUT terraform:   + file_permission = "0644"
2025-05-19 21:10:28.922896 | orchestrator | 21:10:28.922 STDOUT terraform:   + filename = "inventory.ci"
2025-05-19 21:10:28.922944 | orchestrator | 21:10:28.922 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.922953 | orchestrator | 21:10:28.922 STDOUT terraform:   }
2025-05-19 21:10:28.922992 | orchestrator | 21:10:28.922 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-19 21:10:28.923030 | orchestrator | 21:10:28.922 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-19 21:10:28.923068 | orchestrator | 21:10:28.923 STDOUT terraform:   + content = (sensitive value)
2025-05-19 21:10:28.923109 | orchestrator | 21:10:28.923 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-19 21:10:28.923154 | orchestrator | 21:10:28.923 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-19 21:10:28.923195 | orchestrator | 21:10:28.923 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-19 21:10:28.923238 | orchestrator | 21:10:28.923 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-19 21:10:28.923281 | orchestrator | 21:10:28.923 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-19 21:10:28.923326 | orchestrator | 21:10:28.923 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-19 21:10:28.923357 | orchestrator | 21:10:28.923 STDOUT terraform:   + directory_permission = "0700"
2025-05-19 21:10:28.923387 | orchestrator | 21:10:28.923 STDOUT terraform:   + file_permission = "0600"
2025-05-19 21:10:28.923425 | orchestrator | 21:10:28.923 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-05-19 21:10:28.923470 | orchestrator | 21:10:28.923 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.923479 | orchestrator | 21:10:28.923 STDOUT terraform:   }
2025-05-19 21:10:28.923519 | orchestrator | 21:10:28.923 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-19 21:10:28.923558 | orchestrator | 21:10:28.923 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-19 21:10:28.923584 | orchestrator | 21:10:28.923 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.923592 | orchestrator | 21:10:28.923 STDOUT terraform:   }
2025-05-19 21:10:28.923656 | orchestrator | 21:10:28.923 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-19 21:10:28.923713 | orchestrator | 21:10:28.923 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-19 21:10:28.923750 | orchestrator | 21:10:28.923 STDOUT terraform:   + attachment = (known after apply)
2025-05-19 21:10:28.923776 | orchestrator | 21:10:28.923 STDOUT terraform:   + availability_zone = "nova"
2025-05-19 21:10:28.923814 | orchestrator | 21:10:28.923 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.923883 | orchestrator | 21:10:28.923 STDOUT terraform:   + image_id = (known after apply)
2025-05-19 21:10:28.923920 | orchestrator | 21:10:28.923 STDOUT terraform:   + metadata = (known after apply)
2025-05-19 21:10:28.923964 | orchestrator | 21:10:28.923 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-05-19 21:10:28.924003 | orchestrator | 21:10:28.923 STDOUT terraform:   + region = (known after apply)
2025-05-19 21:10:28.924023 | orchestrator | 21:10:28.923 STDOUT terraform:   + size = 80
2025-05-19 21:10:28.924046 | orchestrator | 21:10:28.924 STDOUT terraform:   + volume_type = "ssd"
2025-05-19 21:10:28.924054 | orchestrator | 21:10:28.924 STDOUT terraform:   }
2025-05-19 21:10:28.924109 | orchestrator | 21:10:28.924 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-19 21:10:28.924161 | orchestrator | 21:10:28.924 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 21:10:28.924195 | orchestrator | 21:10:28.924 STDOUT terraform:   + attachment = (known after apply)
2025-05-19 21:10:28.924217 | orchestrator | 21:10:28.924 STDOUT terraform:   + availability_zone = "nova"
2025-05-19 21:10:28.924252 | orchestrator | 21:10:28.924 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.924287 | orchestrator | 21:10:28.924 STDOUT terraform:   + image_id = (known after apply)
2025-05-19 21:10:28.924320 | orchestrator | 21:10:28.924 STDOUT terraform:   + metadata = (known after apply)
2025-05-19 21:10:28.924365 | orchestrator | 21:10:28.924 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-05-19 21:10:28.924398 | orchestrator | 21:10:28.924 STDOUT terraform:   + region = (known after apply)
2025-05-19 21:10:28.924421 | orchestrator | 21:10:28.924 STDOUT terraform:   + size = 80
2025-05-19 21:10:28.924445 | orchestrator | 21:10:28.924 STDOUT terraform:   + volume_type = "ssd"
2025-05-19 21:10:28.924453 | orchestrator | 21:10:28.924 STDOUT terraform:   }
2025-05-19 21:10:28.924508 | orchestrator | 21:10:28.924 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-19 21:10:28.924560 | orchestrator | 21:10:28.924 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 21:10:28.924595 | orchestrator | 21:10:28.924 STDOUT terraform:   + attachment = (known after apply)
2025-05-19 21:10:28.924618 | orchestrator | 21:10:28.924 STDOUT terraform:   + availability_zone = "nova"
2025-05-19 21:10:28.924656 | orchestrator | 21:10:28.924 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.924686 | orchestrator | 21:10:28.924 STDOUT terraform:   + image_id = (known after apply)
2025-05-19 21:10:28.924730 | orchestrator | 21:10:28.924 STDOUT terraform:   + metadata = (known after apply)
2025-05-19 21:10:28.924775 | orchestrator | 21:10:28.924 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-05-19 21:10:28.924880 | orchestrator | 21:10:28.924 STDOUT terraform:   + region = (known after apply)
2025-05-19 21:10:28.924908 | orchestrator | 21:10:28.924 STDOUT terraform:   + size = 80
2025-05-19 21:10:28.924935 | orchestrator | 21:10:28.924 STDOUT terraform:   + volume_type = "ssd"
2025-05-19 21:10:28.924943 | orchestrator | 21:10:28.924 STDOUT terraform:   }
2025-05-19 21:10:28.924998 | orchestrator | 21:10:28.924 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-19 21:10:28.925050 | orchestrator | 21:10:28.924 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 21:10:28.925086 | orchestrator | 21:10:28.925 STDOUT terraform:   + attachment = (known after apply)
2025-05-19 21:10:28.925109 | orchestrator | 21:10:28.925 STDOUT terraform:   + availability_zone = "nova"
2025-05-19 21:10:28.925143 | orchestrator | 21:10:28.925 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.925178 | orchestrator | 21:10:28.925 STDOUT terraform:   + image_id = (known after apply)
2025-05-19 21:10:28.925211 | orchestrator | 21:10:28.925 STDOUT terraform:   + metadata = (known after apply)
2025-05-19 21:10:28.925255 | orchestrator | 21:10:28.925 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-05-19 21:10:28.925291 | orchestrator | 21:10:28.925 STDOUT terraform:   + region = (known after apply)
2025-05-19 21:10:28.925314 | orchestrator | 21:10:28.925 STDOUT terraform:   + size = 80
2025-05-19 21:10:28.925338 | orchestrator | 21:10:28.925 STDOUT terraform:   + volume_type = "ssd"
2025-05-19 21:10:28.925346 | orchestrator | 21:10:28.925 STDOUT terraform:   }
2025-05-19 21:10:28.925402 | orchestrator | 21:10:28.925 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-19 21:10:28.925454 | orchestrator | 21:10:28.925 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 21:10:28.925487 | orchestrator | 21:10:28.925 STDOUT terraform:   + attachment = (known after apply)
2025-05-19 21:10:28.925510 | orchestrator | 21:10:28.925 STDOUT terraform:   + availability_zone = "nova"
2025-05-19 21:10:28.925546 | orchestrator | 21:10:28.925 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.925580 | orchestrator | 21:10:28.925 STDOUT terraform:   + image_id = (known after apply)
2025-05-19 21:10:28.925615 | orchestrator | 21:10:28.925 STDOUT terraform:   + metadata = (known after apply)
2025-05-19 21:10:28.925658 | orchestrator | 21:10:28.925 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-05-19 21:10:28.925695 | orchestrator | 21:10:28.925 STDOUT terraform:   + region = (known after apply)
2025-05-19 21:10:28.925718 | orchestrator | 21:10:28.925 STDOUT terraform:   + size = 80
2025-05-19 21:10:28.925743 | orchestrator | 21:10:28.925 STDOUT terraform:   + volume_type = "ssd"
2025-05-19 21:10:28.925752 | orchestrator | 21:10:28.925 STDOUT terraform:   }
2025-05-19 21:10:28.925808 | orchestrator | 21:10:28.925 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-19 21:10:28.925881 | orchestrator | 21:10:28.925 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 21:10:28.925903 | orchestrator | 21:10:28.925 STDOUT terraform:   + attachment = (known after apply)
2025-05-19 21:10:28.925927 | orchestrator | 21:10:28.925 STDOUT terraform:   + availability_zone = "nova"
2025-05-19 21:10:28.925961 | orchestrator | 21:10:28.925 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.925996 | orchestrator | 21:10:28.925 STDOUT terraform:   + image_id = (known after apply)
2025-05-19 21:10:28.926047 | orchestrator | 21:10:28.925 STDOUT terraform:   + metadata = (known after apply)
2025-05-19 21:10:28.926093 | orchestrator | 21:10:28.926 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-05-19 21:10:28.926130 | orchestrator | 21:10:28.926 STDOUT terraform:   + region = (known after apply)
2025-05-19 21:10:28.926152 | orchestrator | 21:10:28.926 STDOUT terraform:   + size = 80
2025-05-19 21:10:28.926177 | orchestrator | 21:10:28.926 STDOUT terraform:   + volume_type = "ssd"
2025-05-19 21:10:28.926185 | orchestrator | 21:10:28.926 STDOUT terraform:   }
2025-05-19 21:10:28.926239 | orchestrator | 21:10:28.926 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-19 21:10:28.926291 | orchestrator | 21:10:28.926 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-19 21:10:28.926325 | orchestrator | 21:10:28.926 STDOUT terraform:   + attachment = (known after apply)
2025-05-19 21:10:28.926352 | orchestrator | 21:10:28.926 STDOUT terraform:   + availability_zone = "nova"
2025-05-19 21:10:28.926382 | orchestrator | 21:10:28.926 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.926417 | orchestrator | 21:10:28.926 STDOUT terraform:   + image_id = (known after apply)
2025-05-19 21:10:28.926452 | orchestrator | 21:10:28.926 STDOUT terraform:   + metadata = (known after apply)
2025-05-19 21:10:28.926496 | orchestrator | 21:10:28.926 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-05-19 21:10:28.926530 | orchestrator | 21:10:28.926 STDOUT terraform:   + region = (known after apply)
2025-05-19 21:10:28.926552 | orchestrator | 21:10:28.926 STDOUT terraform:   + size = 80
2025-05-19 21:10:28.926575 | orchestrator | 21:10:28.926 STDOUT terraform:   + volume_type = "ssd"
2025-05-19 21:10:28.926590 | orchestrator | 21:10:28.926 STDOUT terraform:   }
2025-05-19 21:10:28.926640 | orchestrator | 21:10:28.926 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-19 21:10:28.926691 | orchestrator | 21:10:28.926 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-19 21:10:28.926728 | orchestrator | 21:10:28.926 STDOUT terraform:   + attachment = (known after apply)
2025-05-19 21:10:28.926748 | orchestrator | 21:10:28.926 STDOUT terraform:   + availability_zone = "nova"
2025-05-19 21:10:28.926783 | orchestrator | 21:10:28.926 STDOUT terraform:   + id = (known after apply)
2025-05-19 21:10:28.926816 | orchestrator | 21:10:28.926 STDOUT terraform:   + metadata = (known after apply)
2025-05-19 21:10:28.926889 | orchestrator | 21:10:28.926 STDOUT terraform:   + name = "testbed-volume-0-node-3"
2025-05-19 21:10:28.926922 | orchestrator | 21:10:28.926 STDOUT terraform:   + region = (known after apply)
2025-05-19 21:10:28.926939 | orchestrator | 21:10:28.926 STDOUT terraform:   + size = 20
2025-05-19 21:10:28.926963 | orchestrator | 21:10:28.926 STDOUT terraform:   + volume_type = "ssd"
2025-05-19 21:10:28.926970 | orchestrator | 21:10:28.926 STDOUT terraform:   }
2025-05-19 21:10:28.927024 | orchestrator | 21:10:28.926
STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-19 21:10:28.927072 | orchestrator | 21:10:28.927 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-19 21:10:28.927105 | orchestrator | 21:10:28.927 STDOUT terraform:  + attachment = (known after apply) 2025-05-19 21:10:28.927128 | orchestrator | 21:10:28.927 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.927163 | orchestrator | 21:10:28.927 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.927200 | orchestrator | 21:10:28.927 STDOUT terraform:  + metadata = (known after apply) 2025-05-19 21:10:28.927243 | orchestrator | 21:10:28.927 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-19 21:10:28.927277 | orchestrator | 21:10:28.927 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.927301 | orchestrator | 21:10:28.927 STDOUT terraform:  + size = 20 2025-05-19 21:10:28.927327 | orchestrator | 21:10:28.927 STDOUT terraform:  + volume_type = "ssd" 2025-05-19 21:10:28.927336 | orchestrator | 21:10:28.927 STDOUT terraform:  } 2025-05-19 21:10:28.927383 | orchestrator | 21:10:28.927 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-19 21:10:28.927427 | orchestrator | 21:10:28.927 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-19 21:10:28.927460 | orchestrator | 21:10:28.927 STDOUT terraform:  + attachment = (known after apply) 2025-05-19 21:10:28.927482 | orchestrator | 21:10:28.927 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.927516 | orchestrator | 21:10:28.927 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.927547 | orchestrator | 21:10:28.927 STDOUT terraform:  + metadata = (known after apply) 2025-05-19 21:10:28.927584 | orchestrator | 21:10:28.927 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-19 21:10:28.927616 | orchestrator | 
21:10:28.927 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.927636 | orchestrator | 21:10:28.927 STDOUT terraform:  + size = 20 2025-05-19 21:10:28.927673 | orchestrator | 21:10:28.927 STDOUT terraform:  + volume_type = "ssd" 2025-05-19 21:10:28.927680 | orchestrator | 21:10:28.927 STDOUT terraform:  } 2025-05-19 21:10:28.927728 | orchestrator | 21:10:28.927 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-19 21:10:28.927773 | orchestrator | 21:10:28.927 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-19 21:10:28.927805 | orchestrator | 21:10:28.927 STDOUT terraform:  + attachment = (known after apply) 2025-05-19 21:10:28.927826 | orchestrator | 21:10:28.927 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.927868 | orchestrator | 21:10:28.927 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.927900 | orchestrator | 21:10:28.927 STDOUT terraform:  + metadata = (known after apply) 2025-05-19 21:10:28.927942 | orchestrator | 21:10:28.927 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-19 21:10:28.927973 | orchestrator | 21:10:28.927 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.927994 | orchestrator | 21:10:28.927 STDOUT terraform:  + size = 20 2025-05-19 21:10:28.928015 | orchestrator | 21:10:28.927 STDOUT terraform:  + volume_type = "ssd" 2025-05-19 21:10:28.928024 | orchestrator | 21:10:28.928 STDOUT terraform:  } 2025-05-19 21:10:28.928070 | orchestrator | 21:10:28.928 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-19 21:10:28.928115 | orchestrator | 21:10:28.928 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-19 21:10:28.928145 | orchestrator | 21:10:28.928 STDOUT terraform:  + attachment = (known after apply) 2025-05-19 21:10:28.928167 | orchestrator | 21:10:28.928 STDOUT terraform:  + 
availability_zone = "nova" 2025-05-19 21:10:28.928199 | orchestrator | 21:10:28.928 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.928231 | orchestrator | 21:10:28.928 STDOUT terraform:  + metadata = (known after apply) 2025-05-19 21:10:28.928269 | orchestrator | 21:10:28.928 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-19 21:10:28.928302 | orchestrator | 21:10:28.928 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.928327 | orchestrator | 21:10:28.928 STDOUT terraform:  + size = 20 2025-05-19 21:10:28.928344 | orchestrator | 21:10:28.928 STDOUT terraform:  + volume_type = "ssd" 2025-05-19 21:10:28.928358 | orchestrator | 21:10:28.928 STDOUT terraform:  } 2025-05-19 21:10:28.928405 | orchestrator | 21:10:28.928 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-19 21:10:28.928450 | orchestrator | 21:10:28.928 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-19 21:10:28.928481 | orchestrator | 21:10:28.928 STDOUT terraform:  + attachment = (known after apply) 2025-05-19 21:10:28.928504 | orchestrator | 21:10:28.928 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.928538 | orchestrator | 21:10:28.928 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.928569 | orchestrator | 21:10:28.928 STDOUT terraform:  + metadata = (known after apply) 2025-05-19 21:10:28.928609 | orchestrator | 21:10:28.928 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-19 21:10:28.928641 | orchestrator | 21:10:28.928 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.928663 | orchestrator | 21:10:28.928 STDOUT terraform:  + size = 20 2025-05-19 21:10:28.928684 | orchestrator | 21:10:28.928 STDOUT terraform:  + volume_type = "ssd" 2025-05-19 21:10:28.928691 | orchestrator | 21:10:28.928 STDOUT terraform:  } 2025-05-19 21:10:28.928784 | orchestrator | 21:10:28.928 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-19 21:10:28.928829 | orchestrator | 21:10:28.928 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-19 21:10:28.928881 | orchestrator | 21:10:28.928 STDOUT terraform:  + attachment = (known after apply) 2025-05-19 21:10:28.928886 | orchestrator | 21:10:28.928 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.928913 | orchestrator | 21:10:28.928 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.928946 | orchestrator | 21:10:28.928 STDOUT terraform:  + metadata = (known after apply) 2025-05-19 21:10:28.928985 | orchestrator | 21:10:28.928 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-19 21:10:28.929021 | orchestrator | 21:10:28.928 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.929038 | orchestrator | 21:10:28.929 STDOUT terraform:  + size = 20 2025-05-19 21:10:28.929066 | orchestrator | 21:10:28.929 STDOUT terraform:  + volume_type = "ssd" 2025-05-19 21:10:28.929073 | orchestrator | 21:10:28.929 STDOUT terraform:  } 2025-05-19 21:10:28.929134 | orchestrator | 21:10:28.929 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-19 21:10:28.929175 | orchestrator | 21:10:28.929 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-19 21:10:28.929206 | orchestrator | 21:10:28.929 STDOUT terraform:  + attachment = (known after apply) 2025-05-19 21:10:28.929214 | orchestrator | 21:10:28.929 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.929256 | orchestrator | 21:10:28.929 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.929288 | orchestrator | 21:10:28.929 STDOUT terraform:  + metadata = (known after apply) 2025-05-19 21:10:28.929328 | orchestrator | 21:10:28.929 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-19 21:10:28.929360 | orchestrator | 21:10:28.929 STDOUT 
terraform:  + region = (known after apply) 2025-05-19 21:10:28.929367 | orchestrator | 21:10:28.929 STDOUT terraform:  + size = 20 2025-05-19 21:10:28.929398 | orchestrator | 21:10:28.929 STDOUT terraform:  + volume_type = "ssd" 2025-05-19 21:10:28.929403 | orchestrator | 21:10:28.929 STDOUT terraform:  } 2025-05-19 21:10:28.929466 | orchestrator | 21:10:28.929 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-19 21:10:28.929502 | orchestrator | 21:10:28.929 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-19 21:10:28.929529 | orchestrator | 21:10:28.929 STDOUT terraform:  + attachment = (known after apply) 2025-05-19 21:10:28.929554 | orchestrator | 21:10:28.929 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.929583 | orchestrator | 21:10:28.929 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.929615 | orchestrator | 21:10:28.929 STDOUT terraform:  + metadata = (known after apply) 2025-05-19 21:10:28.929653 | orchestrator | 21:10:28.929 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-19 21:10:28.929684 | orchestrator | 21:10:28.929 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.929704 | orchestrator | 21:10:28.929 STDOUT terraform:  + size = 20 2025-05-19 21:10:28.929711 | orchestrator | 21:10:28.929 STDOUT terraform:  + volume_type = "ssd" 2025-05-19 21:10:28.929722 | orchestrator | 21:10:28.929 STDOUT terraform:  } 2025-05-19 21:10:28.929776 | orchestrator | 21:10:28.929 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-19 21:10:28.929820 | orchestrator | 21:10:28.929 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-19 21:10:28.929880 | orchestrator | 21:10:28.929 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-19 21:10:28.929915 | orchestrator | 21:10:28.929 STDOUT terraform:  + access_ip_v6 = (known after apply) 
2025-05-19 21:10:28.929950 | orchestrator | 21:10:28.929 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 21:10:28.929987 | orchestrator | 21:10:28.929 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 21:10:28.930007 | orchestrator | 21:10:28.929 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.930033 | orchestrator | 21:10:28.929 STDOUT terraform:  + config_drive = true 2025-05-19 21:10:28.930078 | orchestrator | 21:10:28.930 STDOUT terraform:  + created = (known after apply) 2025-05-19 21:10:28.930120 | orchestrator | 21:10:28.930 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 21:10:28.930177 | orchestrator | 21:10:28.930 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-19 21:10:28.930203 | orchestrator | 21:10:28.930 STDOUT terraform:  + force_delete = false 2025-05-19 21:10:28.930241 | orchestrator | 21:10:28.930 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.930277 | orchestrator | 21:10:28.930 STDOUT terraform:  + image_id = (known after apply) 2025-05-19 21:10:28.930314 | orchestrator | 21:10:28.930 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 21:10:28.930341 | orchestrator | 21:10:28.930 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 21:10:28.930374 | orchestrator | 21:10:28.930 STDOUT terraform:  + name = "testbed-manager" 2025-05-19 21:10:28.930400 | orchestrator | 21:10:28.930 STDOUT terraform:  + power_state = "active" 2025-05-19 21:10:28.930436 | orchestrator | 21:10:28.930 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.930473 | orchestrator | 21:10:28.930 STDOUT terraform:  + security_groups = (known after apply) 2025-05-19 21:10:28.930493 | orchestrator | 21:10:28.930 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 21:10:28.930531 | orchestrator | 21:10:28.930 STDOUT terraform:  + updated = (known after apply) 2025-05-19 21:10:28.930569 | orchestrator | 21:10:28.930 STDOUT terraform:  + user_data 
= (known after apply) 2025-05-19 21:10:28.930576 | orchestrator | 21:10:28.930 STDOUT terraform:  + block_device { 2025-05-19 21:10:28.930608 | orchestrator | 21:10:28.930 STDOUT terraform:  + boot_index = 0 2025-05-19 21:10:28.930636 | orchestrator | 21:10:28.930 STDOUT terraform:  + delete_on_termination = false 2025-05-19 21:10:28.930666 | orchestrator | 21:10:28.930 STDOUT terraform:  + destination_type = "volume" 2025-05-19 21:10:28.930695 | orchestrator | 21:10:28.930 STDOUT terraform:  + multiattach = false 2025-05-19 21:10:28.930726 | orchestrator | 21:10:28.930 STDOUT terraform:  + source_type = "volume" 2025-05-19 21:10:28.930767 | orchestrator | 21:10:28.930 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 21:10:28.930773 | orchestrator | 21:10:28.930 STDOUT terraform:  } 2025-05-19 21:10:28.930784 | orchestrator | 21:10:28.930 STDOUT terraform:  + network { 2025-05-19 21:10:28.930811 | orchestrator | 21:10:28.930 STDOUT terraform:  + access_network = false 2025-05-19 21:10:28.930853 | orchestrator | 21:10:28.930 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 21:10:28.930883 | orchestrator | 21:10:28.930 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 21:10:28.930908 | orchestrator | 21:10:28.930 STDOUT terraform:  + mac = (known after apply) 2025-05-19 21:10:28.930938 | orchestrator | 21:10:28.930 STDOUT terraform:  + name = (known after apply) 2025-05-19 21:10:28.930972 | orchestrator | 21:10:28.930 STDOUT terraform:  + port = (known after apply) 2025-05-19 21:10:28.930996 | orchestrator | 21:10:28.930 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 21:10:28.931005 | orchestrator | 21:10:28.930 STDOUT terraform:  } 2025-05-19 21:10:28.931011 | orchestrator | 21:10:28.931 STDOUT terraform:  } 2025-05-19 21:10:28.931064 | orchestrator | 21:10:28.931 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-19 21:10:28.931107 | orchestrator | 21:10:28.931 STDOUT 
terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-19 21:10:28.931142 | orchestrator | 21:10:28.931 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-19 21:10:28.931179 | orchestrator | 21:10:28.931 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-19 21:10:28.931215 | orchestrator | 21:10:28.931 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 21:10:28.931250 | orchestrator | 21:10:28.931 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 21:10:28.931275 | orchestrator | 21:10:28.931 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.931294 | orchestrator | 21:10:28.931 STDOUT terraform:  + config_drive = true 2025-05-19 21:10:28.931329 | orchestrator | 21:10:28.931 STDOUT terraform:  + created = (known after apply) 2025-05-19 21:10:28.931366 | orchestrator | 21:10:28.931 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 21:10:28.931396 | orchestrator | 21:10:28.931 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-19 21:10:28.931420 | orchestrator | 21:10:28.931 STDOUT terraform:  + force_delete = false 2025-05-19 21:10:28.931457 | orchestrator | 21:10:28.931 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.931494 | orchestrator | 21:10:28.931 STDOUT terraform:  + image_id = (known after apply) 2025-05-19 21:10:28.931532 | orchestrator | 21:10:28.931 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 21:10:28.931557 | orchestrator | 21:10:28.931 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 21:10:28.931588 | orchestrator | 21:10:28.931 STDOUT terraform:  + name = "testbed-node-0" 2025-05-19 21:10:28.931614 | orchestrator | 21:10:28.931 STDOUT terraform:  + power_state = "active" 2025-05-19 21:10:28.931651 | orchestrator | 21:10:28.931 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.931687 | orchestrator | 21:10:28.931 STDOUT terraform:  + security_groups = (known after apply) 
2025-05-19 21:10:28.931706 | orchestrator | 21:10:28.931 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 21:10:28.931743 | orchestrator | 21:10:28.931 STDOUT terraform:  + updated = (known after apply) 2025-05-19 21:10:28.931794 | orchestrator | 21:10:28.931 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-19 21:10:28.931800 | orchestrator | 21:10:28.931 STDOUT terraform:  + block_device { 2025-05-19 21:10:28.931863 | orchestrator | 21:10:28.931 STDOUT terraform:  + boot_index = 0 2025-05-19 21:10:28.931869 | orchestrator | 21:10:28.931 STDOUT terraform:  + delete_on_termination = false 2025-05-19 21:10:28.931896 | orchestrator | 21:10:28.931 STDOUT terraform:  + destination_type = "volume" 2025-05-19 21:10:28.931924 | orchestrator | 21:10:28.931 STDOUT terraform:  + multiattach = false 2025-05-19 21:10:28.931955 | orchestrator | 21:10:28.931 STDOUT terraform:  + source_type = "volume" 2025-05-19 21:10:28.931994 | orchestrator | 21:10:28.931 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 21:10:28.932001 | orchestrator | 21:10:28.931 STDOUT terraform:  } 2025-05-19 21:10:28.932007 | orchestrator | 21:10:28.931 STDOUT terraform:  + network { 2025-05-19 21:10:28.932037 | orchestrator | 21:10:28.932 STDOUT terraform:  + access_network = false 2025-05-19 21:10:28.932069 | orchestrator | 21:10:28.932 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 21:10:28.932100 | orchestrator | 21:10:28.932 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 21:10:28.932133 | orchestrator | 21:10:28.932 STDOUT terraform:  + mac = (known after apply) 2025-05-19 21:10:28.932169 | orchestrator | 21:10:28.932 STDOUT terraform:  + name = (known after apply) 2025-05-19 21:10:28.932198 | orchestrator | 21:10:28.932 STDOUT terraform:  + port = (known after apply) 2025-05-19 21:10:28.932231 | orchestrator | 21:10:28.932 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 21:10:28.932237 | 
orchestrator | 21:10:28.932 STDOUT terraform:  } 2025-05-19 21:10:28.932243 | orchestrator | 21:10:28.932 STDOUT terraform:  } 2025-05-19 21:10:28.932294 | orchestrator | 21:10:28.932 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-19 21:10:28.932337 | orchestrator | 21:10:28.932 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-19 21:10:28.932372 | orchestrator | 21:10:28.932 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-19 21:10:28.932407 | orchestrator | 21:10:28.932 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-19 21:10:28.932443 | orchestrator | 21:10:28.932 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 21:10:28.932479 | orchestrator | 21:10:28.932 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 21:10:28.932503 | orchestrator | 21:10:28.932 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.932513 | orchestrator | 21:10:28.932 STDOUT terraform:  + config_drive = true 2025-05-19 21:10:28.932555 | orchestrator | 21:10:28.932 STDOUT terraform:  + created = (known after apply) 2025-05-19 21:10:28.932592 | orchestrator | 21:10:28.932 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 21:10:28.932622 | orchestrator | 21:10:28.932 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-19 21:10:28.932646 | orchestrator | 21:10:28.932 STDOUT terraform:  + force_delete = false 2025-05-19 21:10:28.932682 | orchestrator | 21:10:28.932 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.932718 | orchestrator | 21:10:28.932 STDOUT terraform:  + image_id = (known after apply) 2025-05-19 21:10:28.932755 | orchestrator | 21:10:28.932 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 21:10:28.932781 | orchestrator | 21:10:28.932 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 21:10:28.932812 | orchestrator | 21:10:28.932 STDOUT terraform:  + name = "testbed-node-1" 
2025-05-19 21:10:28.932846 | orchestrator | 21:10:28.932 STDOUT terraform:  + power_state = "active" 2025-05-19 21:10:28.932923 | orchestrator | 21:10:28.932 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.932951 | orchestrator | 21:10:28.932 STDOUT terraform:  + security_groups = (known after apply) 2025-05-19 21:10:28.932975 | orchestrator | 21:10:28.932 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 21:10:28.933013 | orchestrator | 21:10:28.932 STDOUT terraform:  + updated = (known after apply) 2025-05-19 21:10:28.933064 | orchestrator | 21:10:28.933 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-19 21:10:28.933070 | orchestrator | 21:10:28.933 STDOUT terraform:  + block_device { 2025-05-19 21:10:28.933104 | orchestrator | 21:10:28.933 STDOUT terraform:  + boot_index = 0 2025-05-19 21:10:28.933133 | orchestrator | 21:10:28.933 STDOUT terraform:  + delete_on_termination = false 2025-05-19 21:10:28.933165 | orchestrator | 21:10:28.933 STDOUT terraform:  + destination_type = "volume" 2025-05-19 21:10:28.933196 | orchestrator | 21:10:28.933 STDOUT terraform:  + multiattach = false 2025-05-19 21:10:28.933227 | orchestrator | 21:10:28.933 STDOUT terraform:  + source_type = "volume" 2025-05-19 21:10:28.933267 | orchestrator | 21:10:28.933 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 21:10:28.933273 | orchestrator | 21:10:28.933 STDOUT terraform:  } 2025-05-19 21:10:28.933279 | orchestrator | 21:10:28.933 STDOUT terraform:  + network { 2025-05-19 21:10:28.933308 | orchestrator | 21:10:28.933 STDOUT terraform:  + access_network = false 2025-05-19 21:10:28.933340 | orchestrator | 21:10:28.933 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 21:10:28.933372 | orchestrator | 21:10:28.933 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 21:10:28.933405 | orchestrator | 21:10:28.933 STDOUT terraform:  + mac = (known after apply) 2025-05-19 21:10:28.933436 | 
orchestrator | 21:10:28.933 STDOUT terraform:  + name = (known after apply) 2025-05-19 21:10:28.933468 | orchestrator | 21:10:28.933 STDOUT terraform:  + port = (known after apply) 2025-05-19 21:10:28.933501 | orchestrator | 21:10:28.933 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 21:10:28.933507 | orchestrator | 21:10:28.933 STDOUT terraform:  } 2025-05-19 21:10:28.933512 | orchestrator | 21:10:28.933 STDOUT terraform:  } 2025-05-19 21:10:28.933564 | orchestrator | 21:10:28.933 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-19 21:10:28.933607 | orchestrator | 21:10:28.933 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-19 21:10:28.933643 | orchestrator | 21:10:28.933 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-19 21:10:28.933678 | orchestrator | 21:10:28.933 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-19 21:10:28.933714 | orchestrator | 21:10:28.933 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 21:10:28.933752 | orchestrator | 21:10:28.933 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 21:10:28.933779 | orchestrator | 21:10:28.933 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.933786 | orchestrator | 21:10:28.933 STDOUT terraform:  + config_drive = true 2025-05-19 21:10:28.933855 | orchestrator | 21:10:28.933 STDOUT terraform:  + created = (known after apply) 2025-05-19 21:10:28.933892 | orchestrator | 21:10:28.933 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 21:10:28.933924 | orchestrator | 21:10:28.933 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-19 21:10:28.933943 | orchestrator | 21:10:28.933 STDOUT terraform:  + force_delete = false 2025-05-19 21:10:28.933978 | orchestrator | 21:10:28.933 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.934026 | orchestrator | 21:10:28.933 STDOUT terraform:  + image_id = (known after 
apply) 2025-05-19 21:10:28.934063 | orchestrator | 21:10:28.934 STDOUT terraform:  + image_name = (known after apply) 2025-05-19 21:10:28.934089 | orchestrator | 21:10:28.934 STDOUT terraform:  + key_pair = "testbed" 2025-05-19 21:10:28.934120 | orchestrator | 21:10:28.934 STDOUT terraform:  + name = "testbed-node-2" 2025-05-19 21:10:28.934145 | orchestrator | 21:10:28.934 STDOUT terraform:  + power_state = "active" 2025-05-19 21:10:28.934183 | orchestrator | 21:10:28.934 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.934220 | orchestrator | 21:10:28.934 STDOUT terraform:  + security_groups = (known after apply) 2025-05-19 21:10:28.934244 | orchestrator | 21:10:28.934 STDOUT terraform:  + stop_before_destroy = false 2025-05-19 21:10:28.934280 | orchestrator | 21:10:28.934 STDOUT terraform:  + updated = (known after apply) 2025-05-19 21:10:28.934333 | orchestrator | 21:10:28.934 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-19 21:10:28.934340 | orchestrator | 21:10:28.934 STDOUT terraform:  + block_device { 2025-05-19 21:10:28.934371 | orchestrator | 21:10:28.934 STDOUT terraform:  + boot_index = 0 2025-05-19 21:10:28.934398 | orchestrator | 21:10:28.934 STDOUT terraform:  + delete_on_termination = false 2025-05-19 21:10:28.934429 | orchestrator | 21:10:28.934 STDOUT terraform:  + destination_type = "volume" 2025-05-19 21:10:28.934458 | orchestrator | 21:10:28.934 STDOUT terraform:  + multiattach = false 2025-05-19 21:10:28.934488 | orchestrator | 21:10:28.934 STDOUT terraform:  + source_type = "volume" 2025-05-19 21:10:28.934529 | orchestrator | 21:10:28.934 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 21:10:28.934535 | orchestrator | 21:10:28.934 STDOUT terraform:  } 2025-05-19 21:10:28.934557 | orchestrator | 21:10:28.934 STDOUT terraform:  + network { 2025-05-19 21:10:28.934563 | orchestrator | 21:10:28.934 STDOUT terraform:  + access_network = false 2025-05-19 21:10:28.934603 | 
orchestrator | 21:10:28.934 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-19 21:10:28.934634 | orchestrator | 21:10:28.934 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-19 21:10:28.934669 | orchestrator | 21:10:28.934 STDOUT terraform:  + mac = (known after apply) 2025-05-19 21:10:28.934701 | orchestrator | 21:10:28.934 STDOUT terraform:  + name = (known after apply) 2025-05-19 21:10:28.934734 | orchestrator | 21:10:28.934 STDOUT terraform:  + port = (known after apply) 2025-05-19 21:10:28.934766 | orchestrator | 21:10:28.934 STDOUT terraform:  + uuid = (known after apply) 2025-05-19 21:10:28.934772 | orchestrator | 21:10:28.934 STDOUT terraform:  } 2025-05-19 21:10:28.934779 | orchestrator | 21:10:28.934 STDOUT terraform:  } 2025-05-19 21:10:28.934843 | orchestrator | 21:10:28.934 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-05-19 21:10:28.934884 | orchestrator | 21:10:28.934 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-19 21:10:28.934921 | orchestrator | 21:10:28.934 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-19 21:10:28.934955 | orchestrator | 21:10:28.934 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-19 21:10:28.934992 | orchestrator | 21:10:28.934 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-19 21:10:28.935027 | orchestrator | 21:10:28.934 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 21:10:28.935052 | orchestrator | 21:10:28.935 STDOUT terraform:  + availability_zone = "nova" 2025-05-19 21:10:28.935070 | orchestrator | 21:10:28.935 STDOUT terraform:  + config_drive = true 2025-05-19 21:10:28.935106 | orchestrator | 21:10:28.935 STDOUT terraform:  + created = (known after apply) 2025-05-19 21:10:28.935142 | orchestrator | 21:10:28.935 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-19 21:10:28.935172 | orchestrator | 21:10:28.935 STDOUT terraform:  + 
2025-05-19 21:10:28.935196 | orchestrator | 21:10:28.935 STDOUT terraform:
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
orchestrator | 21:10:28.946 STDOUT terraform:  } 2025-05-19 21:10:28.946285 | orchestrator | 21:10:28.946 STDOUT terraform:  + binding (known after apply) 2025-05-19 21:10:28.946294 | orchestrator | 21:10:28.946 STDOUT terraform:  + fixed_ip { 2025-05-19 21:10:28.946304 | orchestrator | 21:10:28.946 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-19 21:10:28.946313 | orchestrator | 21:10:28.946 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-19 21:10:28.946323 | orchestrator | 21:10:28.946 STDOUT terraform:  } 2025-05-19 21:10:28.946332 | orchestrator | 21:10:28.946 STDOUT terraform:  } 2025-05-19 21:10:28.946342 | orchestrator | 21:10:28.946 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-19 21:10:28.946354 | orchestrator | 21:10:28.946 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-19 21:10:28.946369 | orchestrator | 21:10:28.946 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-19 21:10:28.946379 | orchestrator | 21:10:28.946 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-19 21:10:28.946391 | orchestrator | 21:10:28.946 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-19 21:10:28.946401 | orchestrator | 21:10:28.946 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 21:10:28.946443 | orchestrator | 21:10:28.946 STDOUT terraform:  + device_id = (known after apply) 2025-05-19 21:10:28.946465 | orchestrator | 21:10:28.946 STDOUT terraform:  + device_owner = (known after apply) 2025-05-19 21:10:28.946523 | orchestrator | 21:10:28.946 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-19 21:10:28.946536 | orchestrator | 21:10:28.946 STDOUT terraform:  + dns_name = (known after apply) 2025-05-19 21:10:28.948869 | orchestrator | 21:10:28.946 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.948903 | orchestrator | 21:10:28.946 STDOUT terraform:  + 
mac_address = (known after apply) 2025-05-19 21:10:28.948912 | orchestrator | 21:10:28.946 STDOUT terraform:  + network_id = (known after apply) 2025-05-19 21:10:28.948921 | orchestrator | 21:10:28.946 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-19 21:10:28.948929 | orchestrator | 21:10:28.946 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-19 21:10:28.948937 | orchestrator | 21:10:28.946 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.948945 | orchestrator | 21:10:28.946 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-19 21:10:28.948954 | orchestrator | 21:10:28.946 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.948973 | orchestrator | 21:10:28.946 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.948982 | orchestrator | 21:10:28.946 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-19 21:10:28.948992 | orchestrator | 21:10:28.946 STDOUT terraform:  } 2025-05-19 21:10:28.949000 | orchestrator | 21:10:28.946 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949008 | orchestrator | 21:10:28.946 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-19 21:10:28.949016 | orchestrator | 21:10:28.946 STDOUT terraform:  } 2025-05-19 21:10:28.949024 | orchestrator | 21:10:28.946 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949032 | orchestrator | 21:10:28.946 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-19 21:10:28.949040 | orchestrator | 21:10:28.946 STDOUT terraform:  } 2025-05-19 21:10:28.949049 | orchestrator | 21:10:28.946 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949057 | orchestrator | 21:10:28.946 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-19 21:10:28.949066 | orchestrator | 21:10:28.946 STDOUT terraform:  } 2025-05-19 21:10:28.949074 | orchestrator | 21:10:28.947 STDOUT terraform:  + binding (known after apply) 2025-05-19 
21:10:28.949083 | orchestrator | 21:10:28.947 STDOUT terraform:  + fixed_ip { 2025-05-19 21:10:28.949091 | orchestrator | 21:10:28.947 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-19 21:10:28.949100 | orchestrator | 21:10:28.947 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-19 21:10:28.949109 | orchestrator | 21:10:28.947 STDOUT terraform:  } 2025-05-19 21:10:28.949117 | orchestrator | 21:10:28.947 STDOUT terraform:  } 2025-05-19 21:10:28.949126 | orchestrator | 21:10:28.947 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-19 21:10:28.949134 | orchestrator | 21:10:28.947 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-19 21:10:28.949143 | orchestrator | 21:10:28.947 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-19 21:10:28.949151 | orchestrator | 21:10:28.947 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-19 21:10:28.949160 | orchestrator | 21:10:28.947 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-19 21:10:28.949168 | orchestrator | 21:10:28.947 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 21:10:28.949177 | orchestrator | 21:10:28.947 STDOUT terraform:  + device_id = (known after apply) 2025-05-19 21:10:28.949185 | orchestrator | 21:10:28.947 STDOUT terraform:  + device_owner = (known after apply) 2025-05-19 21:10:28.949193 | orchestrator | 21:10:28.947 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-19 21:10:28.949202 | orchestrator | 21:10:28.947 STDOUT terraform:  + dns_name = (known after apply) 2025-05-19 21:10:28.949210 | orchestrator | 21:10:28.947 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.949217 | orchestrator | 21:10:28.947 STDOUT terraform:  + mac_address = (known after apply) 2025-05-19 21:10:28.949235 | orchestrator | 21:10:28.947 STDOUT terraform:  + network_id = (known after apply) 2025-05-19 
21:10:28.949250 | orchestrator | 21:10:28.947 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-19 21:10:28.949257 | orchestrator | 21:10:28.947 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-19 21:10:28.949265 | orchestrator | 21:10:28.947 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.949273 | orchestrator | 21:10:28.947 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-19 21:10:28.949280 | orchestrator | 21:10:28.947 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.949287 | orchestrator | 21:10:28.947 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949295 | orchestrator | 21:10:28.947 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-19 21:10:28.949303 | orchestrator | 21:10:28.947 STDOUT terraform:  } 2025-05-19 21:10:28.949310 | orchestrator | 21:10:28.947 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949318 | orchestrator | 21:10:28.947 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-19 21:10:28.949322 | orchestrator | 21:10:28.947 STDOUT terraform:  } 2025-05-19 21:10:28.949327 | orchestrator | 21:10:28.947 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949340 | orchestrator | 21:10:28.947 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-19 21:10:28.949348 | orchestrator | 21:10:28.947 STDOUT terraform:  } 2025-05-19 21:10:28.949356 | orchestrator | 21:10:28.947 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949365 | orchestrator | 21:10:28.947 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-19 21:10:28.949373 | orchestrator | 21:10:28.947 STDOUT terraform:  } 2025-05-19 21:10:28.949381 | orchestrator | 21:10:28.947 STDOUT terraform:  + binding (known after apply) 2025-05-19 21:10:28.949389 | orchestrator | 21:10:28.947 STDOUT terraform:  + fixed_ip { 2025-05-19 21:10:28.949396 | orchestrator | 21:10:28.947 STDOUT terraform:  + 
ip_address = "192.168.16.14" 2025-05-19 21:10:28.949403 | orchestrator | 21:10:28.947 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-19 21:10:28.949411 | orchestrator | 21:10:28.947 STDOUT terraform:  } 2025-05-19 21:10:28.949418 | orchestrator | 21:10:28.948 STDOUT terraform:  } 2025-05-19 21:10:28.949426 | orchestrator | 21:10:28.948 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-19 21:10:28.949433 | orchestrator | 21:10:28.948 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-19 21:10:28.949441 | orchestrator | 21:10:28.948 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-19 21:10:28.949449 | orchestrator | 21:10:28.948 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-19 21:10:28.949457 | orchestrator | 21:10:28.948 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-19 21:10:28.949465 | orchestrator | 21:10:28.949 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 21:10:28.949474 | orchestrator | 21:10:28.949 STDOUT terraform:  + device_id = (known after apply) 2025-05-19 21:10:28.949482 | orchestrator | 21:10:28.949 STDOUT terraform:  + device_owner = (known after apply) 2025-05-19 21:10:28.949502 | orchestrator | 21:10:28.949 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-19 21:10:28.949510 | orchestrator | 21:10:28.949 STDOUT terraform:  + dns_name = (known after apply) 2025-05-19 21:10:28.949519 | orchestrator | 21:10:28.949 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.949531 | orchestrator | 21:10:28.949 STDOUT terraform:  + mac_address = (known after apply) 2025-05-19 21:10:28.949540 | orchestrator | 21:10:28.949 STDOUT terraform:  + network_id = (known after apply) 2025-05-19 21:10:28.949548 | orchestrator | 21:10:28.949 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-19 21:10:28.949556 | orchestrator | 21:10:28.949 
STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-19 21:10:28.949564 | orchestrator | 21:10:28.949 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.949573 | orchestrator | 21:10:28.949 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-19 21:10:28.949581 | orchestrator | 21:10:28.949 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.949589 | orchestrator | 21:10:28.949 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949597 | orchestrator | 21:10:28.949 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-19 21:10:28.949606 | orchestrator | 21:10:28.949 STDOUT terraform:  } 2025-05-19 21:10:28.949614 | orchestrator | 21:10:28.949 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949626 | orchestrator | 21:10:28.949 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-19 21:10:28.949634 | orchestrator | 21:10:28.949 STDOUT terraform:  } 2025-05-19 21:10:28.949643 | orchestrator | 21:10:28.949 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949651 | orchestrator | 21:10:28.949 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-19 21:10:28.949660 | orchestrator | 21:10:28.949 STDOUT terraform:  } 2025-05-19 21:10:28.949671 | orchestrator | 21:10:28.949 STDOUT terraform:  + allowed_address_pairs { 2025-05-19 21:10:28.949679 | orchestrator | 21:10:28.949 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-19 21:10:28.949687 | orchestrator | 21:10:28.949 STDOUT terraform:  } 2025-05-19 21:10:28.949699 | orchestrator | 21:10:28.949 STDOUT terraform:  + binding (known after apply) 2025-05-19 21:10:28.949707 | orchestrator | 21:10:28.949 STDOUT terraform:  + fixed_ip { 2025-05-19 21:10:28.949719 | orchestrator | 21:10:28.949 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-19 21:10:28.949760 | orchestrator | 21:10:28.949 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-19 21:10:28.949770 | 
orchestrator | 21:10:28.949 STDOUT terraform:  } 2025-05-19 21:10:28.949782 | orchestrator | 21:10:28.949 STDOUT terraform:  } 2025-05-19 21:10:28.949828 | orchestrator | 21:10:28.949 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-19 21:10:28.949886 | orchestrator | 21:10:28.949 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-19 21:10:28.949895 | orchestrator | 21:10:28.949 STDOUT terraform:  + force_destroy = false 2025-05-19 21:10:28.949917 | orchestrator | 21:10:28.949 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.949952 | orchestrator | 21:10:28.949 STDOUT terraform:  + port_id = (known after apply) 2025-05-19 21:10:28.949964 | orchestrator | 21:10:28.949 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.950007 | orchestrator | 21:10:28.949 STDOUT terraform:  + router_id = (known after apply) 2025-05-19 21:10:28.950061 | orchestrator | 21:10:28.949 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-19 21:10:28.950073 | orchestrator | 21:10:28.950 STDOUT terraform:  } 2025-05-19 21:10:28.950122 | orchestrator | 21:10:28.950 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-19 21:10:28.950153 | orchestrator | 21:10:28.950 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-19 21:10:28.950199 | orchestrator | 21:10:28.950 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-19 21:10:28.950241 | orchestrator | 21:10:28.950 STDOUT terraform:  + all_tags = (known after apply) 2025-05-19 21:10:28.950252 | orchestrator | 21:10:28.950 STDOUT terraform:  + availability_zone_hints = [ 2025-05-19 21:10:28.950263 | orchestrator | 21:10:28.950 STDOUT terraform:  + "nova", 2025-05-19 21:10:28.950272 | orchestrator | 21:10:28.950 STDOUT terraform:  ] 2025-05-19 21:10:28.950312 | orchestrator | 21:10:28.950 STDOUT terraform:  + distributed = 
(known after apply) 2025-05-19 21:10:28.950345 | orchestrator | 21:10:28.950 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-19 21:10:28.950397 | orchestrator | 21:10:28.950 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-19 21:10:28.950430 | orchestrator | 21:10:28.950 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.950442 | orchestrator | 21:10:28.950 STDOUT terraform:  + name = "testbed" 2025-05-19 21:10:28.950495 | orchestrator | 21:10:28.950 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.950528 | orchestrator | 21:10:28.950 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.950540 | orchestrator | 21:10:28.950 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-19 21:10:28.950551 | orchestrator | 21:10:28.950 STDOUT terraform:  } 2025-05-19 21:10:28.950621 | orchestrator | 21:10:28.950 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-19 21:10:28.950675 | orchestrator | 21:10:28.950 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-19 21:10:28.950687 | orchestrator | 21:10:28.950 STDOUT terraform:  + description = "ssh" 2025-05-19 21:10:28.950698 | orchestrator | 21:10:28.950 STDOUT terraform:  + direction = "ingress" 2025-05-19 21:10:28.950730 | orchestrator | 21:10:28.950 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 21:10:28.950742 | orchestrator | 21:10:28.950 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.950774 | orchestrator | 21:10:28.950 STDOUT terraform:  + port_range_max = 22 2025-05-19 21:10:28.950786 | orchestrator | 21:10:28.950 STDOUT terraform:  + port_range_min = 22 2025-05-19 21:10:28.950804 | orchestrator | 21:10:28.950 STDOUT terraform:  + protocol = "tcp" 2025-05-19 21:10:28.950850 | orchestrator | 21:10:28.950 STDOUT terraform:  + region = (known after 
apply) 2025-05-19 21:10:28.950862 | orchestrator | 21:10:28.950 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 21:10:28.950900 | orchestrator | 21:10:28.950 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 21:10:28.950913 | orchestrator | 21:10:28.950 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 21:10:28.950956 | orchestrator | 21:10:28.950 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.950969 | orchestrator | 21:10:28.950 STDOUT terraform:  } 2025-05-19 21:10:28.951022 | orchestrator | 21:10:28.950 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-19 21:10:28.951075 | orchestrator | 21:10:28.951 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-19 21:10:28.951088 | orchestrator | 21:10:28.951 STDOUT terraform:  + description = "wireguard" 2025-05-19 21:10:28.951119 | orchestrator | 21:10:28.951 STDOUT terraform:  + direction = "ingress" 2025-05-19 21:10:28.951130 | orchestrator | 21:10:28.951 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 21:10:28.951161 | orchestrator | 21:10:28.951 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.951172 | orchestrator | 21:10:28.951 STDOUT terraform:  + port_range_max = 51820 2025-05-19 21:10:28.951184 | orchestrator | 21:10:28.951 STDOUT terraform:  + port_range_min = 51820 2025-05-19 21:10:28.951222 | orchestrator | 21:10:28.951 STDOUT terraform:  + protocol = "udp" 2025-05-19 21:10:28.951235 | orchestrator | 21:10:28.951 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.951274 | orchestrator | 21:10:28.951 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 21:10:28.951286 | orchestrator | 21:10:28.951 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 21:10:28.951326 | orchestrator | 21:10:28.951 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-05-19 21:10:28.951339 | orchestrator | 21:10:28.951 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.951350 | orchestrator | 21:10:28.951 STDOUT terraform:  } 2025-05-19 21:10:28.951418 | orchestrator | 21:10:28.951 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-19 21:10:28.951471 | orchestrator | 21:10:28.951 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-19 21:10:28.951484 | orchestrator | 21:10:28.951 STDOUT terraform:  + direction = "ingress" 2025-05-19 21:10:28.951496 | orchestrator | 21:10:28.951 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 21:10:28.951542 | orchestrator | 21:10:28.951 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.951555 | orchestrator | 21:10:28.951 STDOUT terraform:  + protocol = "tcp" 2025-05-19 21:10:28.951586 | orchestrator | 21:10:28.951 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.951606 | orchestrator | 21:10:28.951 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 21:10:28.951638 | orchestrator | 21:10:28.951 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-19 21:10:28.951678 | orchestrator | 21:10:28.951 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 21:10:28.951691 | orchestrator | 21:10:28.951 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.951702 | orchestrator | 21:10:28.951 STDOUT terraform:  } 2025-05-19 21:10:28.951761 | orchestrator | 21:10:28.951 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-19 21:10:28.951815 | orchestrator | 21:10:28.951 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-19 21:10:28.951828 | orchestrator | 21:10:28.951 STDOUT terraform:  + direction = 
"ingress" 2025-05-19 21:10:28.951874 | orchestrator | 21:10:28.951 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 21:10:28.951904 | orchestrator | 21:10:28.951 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.951924 | orchestrator | 21:10:28.951 STDOUT terraform:  + protocol = "udp" 2025-05-19 21:10:28.951935 | orchestrator | 21:10:28.951 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.951980 | orchestrator | 21:10:28.951 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 21:10:28.951993 | orchestrator | 21:10:28.951 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-19 21:10:28.952036 | orchestrator | 21:10:28.951 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 21:10:28.952049 | orchestrator | 21:10:28.952 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.952060 | orchestrator | 21:10:28.952 STDOUT terraform:  } 2025-05-19 21:10:28.952127 | orchestrator | 21:10:28.952 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-19 21:10:28.952180 | orchestrator | 21:10:28.952 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-19 21:10:28.952193 | orchestrator | 21:10:28.952 STDOUT terraform:  + direction = "ingress" 2025-05-19 21:10:28.952204 | orchestrator | 21:10:28.952 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 21:10:28.952250 | orchestrator | 21:10:28.952 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.952263 | orchestrator | 21:10:28.952 STDOUT terraform:  + protocol = "icmp" 2025-05-19 21:10:28.952298 | orchestrator | 21:10:28.952 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.952311 | orchestrator | 21:10:28.952 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 21:10:28.952343 | orchestrator | 21:10:28.952 STDOUT terraform:  + remote_ip_prefix = 
"0.0.0.0/0" 2025-05-19 21:10:28.952383 | orchestrator | 21:10:28.952 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 21:10:28.952395 | orchestrator | 21:10:28.952 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.952406 | orchestrator | 21:10:28.952 STDOUT terraform:  } 2025-05-19 21:10:28.952465 | orchestrator | 21:10:28.952 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-19 21:10:28.952516 | orchestrator | 21:10:28.952 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-19 21:10:28.952530 | orchestrator | 21:10:28.952 STDOUT terraform:  + direction = "ingress" 2025-05-19 21:10:28.952541 | orchestrator | 21:10:28.952 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 21:10:28.952585 | orchestrator | 21:10:28.952 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.952597 | orchestrator | 21:10:28.952 STDOUT terraform:  + protocol = "tcp" 2025-05-19 21:10:28.952628 | orchestrator | 21:10:28.952 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.952640 | orchestrator | 21:10:28.952 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 21:10:28.952680 | orchestrator | 21:10:28.952 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 21:10:28.952692 | orchestrator | 21:10:28.952 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 21:10:28.952738 | orchestrator | 21:10:28.952 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.952749 | orchestrator | 21:10:28.952 STDOUT terraform:  } 2025-05-19 21:10:28.952799 | orchestrator | 21:10:28.952 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-19 21:10:28.952860 | orchestrator | 21:10:28.952 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-19 
21:10:28.952873 | orchestrator | 21:10:28.952 STDOUT terraform:  + direction = "ingress" 2025-05-19 21:10:28.952884 | orchestrator | 21:10:28.952 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 21:10:28.952930 | orchestrator | 21:10:28.952 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.952943 | orchestrator | 21:10:28.952 STDOUT terraform:  + protocol = "udp" 2025-05-19 21:10:28.952974 | orchestrator | 21:10:28.952 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.952986 | orchestrator | 21:10:28.952 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 21:10:28.953026 | orchestrator | 21:10:28.952 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-19 21:10:28.953039 | orchestrator | 21:10:28.953 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-19 21:10:28.953083 | orchestrator | 21:10:28.953 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-19 21:10:28.953094 | orchestrator | 21:10:28.953 STDOUT terraform:  } 2025-05-19 21:10:28.953145 | orchestrator | 21:10:28.953 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-19 21:10:28.953196 | orchestrator | 21:10:28.953 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-19 21:10:28.953209 | orchestrator | 21:10:28.953 STDOUT terraform:  + direction = "ingress" 2025-05-19 21:10:28.953220 | orchestrator | 21:10:28.953 STDOUT terraform:  + ethertype = "IPv4" 2025-05-19 21:10:28.953265 | orchestrator | 21:10:28.953 STDOUT terraform:  + id = (known after apply) 2025-05-19 21:10:28.953277 | orchestrator | 21:10:28.953 STDOUT terraform:  + protocol = "icmp" 2025-05-19 21:10:28.953296 | orchestrator | 21:10:28.953 STDOUT terraform:  + region = (known after apply) 2025-05-19 21:10:28.953344 | orchestrator | 21:10:28.953 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-19 21:10:28.953357 | orchestrator | 
21:10:28.953 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-19 21:10:28.953389 | orchestrator | 21:10:28.953 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-19 21:10:28.953401 | orchestrator | 21:10:28.953 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-19 21:10:28.953412 | orchestrator | 21:10:28.953 STDOUT terraform:  }
2025-05-19 21:10:28.953474 | orchestrator | 21:10:28.953 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-05-19 21:10:28.953525 | orchestrator | 21:10:28.953 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-05-19 21:10:28.953538 | orchestrator | 21:10:28.953 STDOUT terraform:  + description = "vrrp"
2025-05-19 21:10:28.953549 | orchestrator | 21:10:28.953 STDOUT terraform:  + direction = "ingress"
2025-05-19 21:10:28.953580 | orchestrator | 21:10:28.953 STDOUT terraform:  + ethertype = "IPv4"
2025-05-19 21:10:28.953592 | orchestrator | 21:10:28.953 STDOUT terraform:  + id = (known after apply)
2025-05-19 21:10:28.953624 | orchestrator | 21:10:28.953 STDOUT terraform:  + protocol = "112"
2025-05-19 21:10:28.953635 | orchestrator | 21:10:28.953 STDOUT terraform:  + region = (known after apply)
2025-05-19 21:10:28.953682 | orchestrator | 21:10:28.953 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-19 21:10:28.953695 | orchestrator | 21:10:28.953 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-19 21:10:28.953727 | orchestrator | 21:10:28.953 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-19 21:10:28.953768 | orchestrator | 21:10:28.953 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-19 21:10:28.953778 | orchestrator | 21:10:28.953 STDOUT terraform:  }
2025-05-19 21:10:28.953820 | orchestrator | 21:10:28.953 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-05-19 21:10:28.953901 | orchestrator | 21:10:28.953 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-05-19 21:10:28.953913 | orchestrator | 21:10:28.953 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 21:10:28.953924 | orchestrator | 21:10:28.953 STDOUT terraform:  + description = "management security group"
2025-05-19 21:10:28.953957 | orchestrator | 21:10:28.953 STDOUT terraform:  + id = (known after apply)
2025-05-19 21:10:28.953968 | orchestrator | 21:10:28.953 STDOUT terraform:  + name = "testbed-management"
2025-05-19 21:10:28.954031 | orchestrator | 21:10:28.953 STDOUT terraform:  + region = (known after apply)
2025-05-19 21:10:28.954046 | orchestrator | 21:10:28.953 STDOUT terraform:  + stateful = (known after apply)
2025-05-19 21:10:28.954077 | orchestrator | 21:10:28.954 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-19 21:10:28.954086 | orchestrator | 21:10:28.954 STDOUT terraform:  }
2025-05-19 21:10:28.954137 | orchestrator | 21:10:28.954 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-05-19 21:10:28.954185 | orchestrator | 21:10:28.954 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-05-19 21:10:28.954197 | orchestrator | 21:10:28.954 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 21:10:28.954239 | orchestrator | 21:10:28.954 STDOUT terraform:  + description = "node security group"
2025-05-19 21:10:28.954252 | orchestrator | 21:10:28.954 STDOUT terraform:  + id = (known after apply)
2025-05-19 21:10:28.954283 | orchestrator | 21:10:28.954 STDOUT terraform:  + name = "testbed-node"
2025-05-19 21:10:28.954294 | orchestrator | 21:10:28.954 STDOUT terraform:  + region = (known after apply)
2025-05-19 21:10:28.954336 | orchestrator | 21:10:28.954 STDOUT terraform:  + stateful = (known after apply)
2025-05-19 21:10:28.954348 | orchestrator | 21:10:28.954 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-19 21:10:28.954358 | orchestrator | 21:10:28.954 STDOUT terraform:  }
2025-05-19 21:10:28.954423 | orchestrator | 21:10:28.954 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-05-19 21:10:28.954463 | orchestrator | 21:10:28.954 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-05-19 21:10:28.954492 | orchestrator | 21:10:28.954 STDOUT terraform:  + all_tags = (known after apply)
2025-05-19 21:10:28.954528 | orchestrator | 21:10:28.954 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-05-19 21:10:28.954537 | orchestrator | 21:10:28.954 STDOUT terraform:  + dns_nameservers = [
2025-05-19 21:10:28.954548 | orchestrator | 21:10:28.954 STDOUT terraform:  + "8.8.8.8",
2025-05-19 21:10:28.954558 | orchestrator | 21:10:28.954 STDOUT terraform:  + "9.9.9.9",
2025-05-19 21:10:28.954568 | orchestrator | 21:10:28.954 STDOUT terraform:  ]
2025-05-19 21:10:28.954603 | orchestrator | 21:10:28.954 STDOUT terraform:  + enable_dhcp = true
2025-05-19 21:10:28.954613 | orchestrator | 21:10:28.954 STDOUT terraform:  + gateway_ip = (known after apply)
2025-05-19 21:10:28.954656 | orchestrator | 21:10:28.954 STDOUT terraform:  + id = (known after apply)
2025-05-19 21:10:28.954668 | orchestrator | 21:10:28.954 STDOUT terraform:  + ip_version = 4
2025-05-19 21:10:28.954697 | orchestrator | 21:10:28.954 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-05-19 21:10:28.954727 | orchestrator | 21:10:28.954 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-05-19 21:10:28.954765 | orchestrator | 21:10:28.954 STDOUT terraform:  + name = "subnet-testbed-management"
2025-05-19 21:10:28.954796 | orchestrator | 21:10:28.954 STDOUT terraform:  + network_id = (known after apply)
2025-05-19 21:10:28.954807 | orchestrator | 21:10:28.954 STDOUT terraform:  + no_gateway = false
2025-05-19 21:10:28.954848 | orchestrator | 21:10:28.954 STDOUT terraform:  + region = (known after apply)
2025-05-19 21:10:28.954899 | orchestrator | 21:10:28.954 STDOUT terraform:  + service_types = (known after apply)
2025-05-19 21:10:28.954936 | orchestrator | 21:10:28.954 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-19 21:10:28.954955 | orchestrator | 21:10:28.954 STDOUT terraform:  + allocation_pool {
2025-05-19 21:10:28.954965 | orchestrator | 21:10:28.954 STDOUT terraform:  + end = "192.168.31.250"
2025-05-19 21:10:28.954975 | orchestrator | 21:10:28.954 STDOUT terraform:  + start = "192.168.31.200"
2025-05-19 21:10:28.954985 | orchestrator | 21:10:28.954 STDOUT terraform:  }
2025-05-19 21:10:28.954995 | orchestrator | 21:10:28.954 STDOUT terraform:  }
2025-05-19 21:10:28.955025 | orchestrator | 21:10:28.954 STDOUT terraform:  # terraform_data.image will be created
2025-05-19 21:10:28.955036 | orchestrator | 21:10:28.955 STDOUT terraform:  + resource "terraform_data" "image" {
2025-05-19 21:10:28.955072 | orchestrator | 21:10:28.955 STDOUT terraform:  + id = (known after apply)
2025-05-19 21:10:28.955083 | orchestrator | 21:10:28.955 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-19 21:10:28.955111 | orchestrator | 21:10:28.955 STDOUT terraform:  + output = (known after apply)
2025-05-19 21:10:28.955122 | orchestrator | 21:10:28.955 STDOUT terraform:  }
2025-05-19 21:10:28.955151 | orchestrator | 21:10:28.955 STDOUT terraform:  # terraform_data.image_node will be created
2025-05-19 21:10:28.955180 | orchestrator | 21:10:28.955 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-05-19 21:10:28.955191 | orchestrator | 21:10:28.955 STDOUT terraform:  + id = (known after apply)
2025-05-19 21:10:28.955201 | orchestrator | 21:10:28.955 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-19 21:10:28.955239 | orchestrator | 21:10:28.955 STDOUT terraform:  + output = (known after apply)
2025-05-19 21:10:28.955249 | orchestrator | 21:10:28.955 STDOUT terraform:  }
2025-05-19 21:10:28.955278 | orchestrator | 21:10:28.955 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-05-19 21:10:28.955287 | orchestrator | 21:10:28.955 STDOUT terraform: Changes to Outputs:
2025-05-19 21:10:28.955297 | orchestrator | 21:10:28.955 STDOUT terraform:  + manager_address = (sensitive value)
2025-05-19 21:10:28.955335 | orchestrator | 21:10:28.955 STDOUT terraform:  + private_key = (sensitive value)
2025-05-19 21:10:29.183647 | orchestrator | 21:10:29.182 STDOUT terraform: terraform_data.image_node: Creating...
2025-05-19 21:10:29.183894 | orchestrator | 21:10:29.183 STDOUT terraform: terraform_data.image: Creating...
2025-05-19 21:10:29.183914 | orchestrator | 21:10:29.183 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=d3c749ce-4ad4-c9e5-365a-65e726c34151]
2025-05-19 21:10:29.185196 | orchestrator | 21:10:29.184 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=fa300094-2a60-0fff-9db3-4921cc1d6c60]
2025-05-19 21:10:29.202110 | orchestrator | 21:10:29.201 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-05-19 21:10:29.205830 | orchestrator | 21:10:29.205 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-05-19 21:10:29.209530 | orchestrator | 21:10:29.209 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-05-19 21:10:29.211348 | orchestrator | 21:10:29.211 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-05-19 21:10:29.211638 | orchestrator | 21:10:29.211 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-05-19 21:10:29.213560 | orchestrator | 21:10:29.213 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-05-19 21:10:29.213999 | orchestrator | 21:10:29.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
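Editor's note: the plan above includes an ingress rule for IP protocol 112 (VRRP), which keepalived-style failover between the testbed nodes needs. As a hedged sketch only, Terraform along these lines would produce such a plan entry; the argument values are copied from the plan output, while the attachment to `security_group_node` is an assumption, since the plan does not show which group the rule belongs to:

```hcl
# Sketch reconstructed from the plan output above; the actual testbed
# Terraform may differ. The security_group_id reference is an assumption.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is IP protocol 112, not TCP/UDP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```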
2025-05-19 21:10:29.214848 | orchestrator | 21:10:29.214 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-05-19 21:10:29.215938 | orchestrator | 21:10:29.215 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-05-19 21:10:29.217624 | orchestrator | 21:10:29.217 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-05-19 21:10:29.674607 | orchestrator | 21:10:29.674 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-19 21:10:29.684596 | orchestrator | 21:10:29.684 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-05-19 21:10:29.732452 | orchestrator | 21:10:29.732 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-05-19 21:10:29.741712 | orchestrator | 21:10:29.740 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-05-19 21:10:35.203416 | orchestrator | 21:10:35.203 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=e4f81c1c-b64d-4bd4-9b85-3483739f196d]
2025-05-19 21:10:35.209589 | orchestrator | 21:10:35.209 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-05-19 21:10:35.269091 | orchestrator | 21:10:35.268 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-19 21:10:35.279080 | orchestrator | 21:10:35.278 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-05-19 21:10:39.208370 | orchestrator | 21:10:39.208 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-05-19 21:10:39.213539 | orchestrator | 21:10:39.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-05-19 21:10:39.213643 | orchestrator | 21:10:39.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-05-19 21:10:39.213787 | orchestrator | 21:10:39.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-05-19 21:10:39.215919 | orchestrator | 21:10:39.215 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-05-19 21:10:39.217070 | orchestrator | 21:10:39.216 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-05-19 21:10:39.218286 | orchestrator | 21:10:39.218 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-05-19 21:10:39.685279 | orchestrator | 21:10:39.684 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-05-19 21:10:39.742625 | orchestrator | 21:10:39.742 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-05-19 21:10:39.785785 | orchestrator | 21:10:39.785 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=934db128-59d0-4992-8eb9-92fedfad2305]
2025-05-19 21:10:39.793517 | orchestrator | 21:10:39.793 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-05-19 21:10:39.812654 | orchestrator | 21:10:39.812 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=65b1a457-74f9-440b-9c0b-913fdfb04314]
2025-05-19 21:10:39.819614 | orchestrator | 21:10:39.819 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-05-19 21:10:39.823717 | orchestrator | 21:10:39.823 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=53ed34a9-290d-4031-aa3e-f95b5c6d33b8]
2025-05-19 21:10:39.831124 | orchestrator | 21:10:39.830 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-05-19 21:10:39.836236 | orchestrator | 21:10:39.835 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=d1012b89-dbd1-43a9-85f9-d367e08581b3]
2025-05-19 21:10:39.845142 | orchestrator | 21:10:39.844 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-05-19 21:10:39.847389 | orchestrator | 21:10:39.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=1c1b0e05-b224-4a51-87f1-7edfa2f843ba]
2025-05-19 21:10:39.853866 | orchestrator | 21:10:39.853 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-05-19 21:10:39.858437 | orchestrator | 21:10:39.858 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=497cbfa2-65b5-4f15-af98-7aa46abcc2e6]
2025-05-19 21:10:39.867580 | orchestrator | 21:10:39.867 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=cd626c85-4d79-4ec3-873e-c38f80c6408d]
2025-05-19 21:10:39.873220 | orchestrator | 21:10:39.873 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-05-19 21:10:39.878506 | orchestrator | 21:10:39.878 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-05-19 21:10:39.881504 | orchestrator | 21:10:39.880 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=3434748c9eaa774926299d9702dc7c34b0ca1d33]
2025-05-19 21:10:39.883375 | orchestrator | 21:10:39.883 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=b5d7ebc4c1022fcc5895164bc23d5f5aaf4a759b]
2025-05-19 21:10:39.887464 | orchestrator | 21:10:39.887 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-05-19 21:10:39.888012 | orchestrator | 21:10:39.887 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-05-19 21:10:39.906228 | orchestrator | 21:10:39.906 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=5aea9423-7155-4edc-a2c1-cc12eb50d261]
2025-05-19 21:10:39.916391 | orchestrator | 21:10:39.916 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=fb54ccde-5cdf-4bdf-8e5b-bd2626265c70]
2025-05-19 21:10:45.280443 | orchestrator | 21:10:45.280 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-05-19 21:10:45.590348 | orchestrator | 21:10:45.589 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=5bf1eb9f-1044-4b2d-b10a-96a4760b0d61]
2025-05-19 21:10:45.671109 | orchestrator | 21:10:45.670 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=6ac36b29-4fca-49b1-af98-c1c7f9d8ab31]
2025-05-19 21:10:45.679909 | orchestrator | 21:10:45.679 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-05-19 21:10:49.794504 | orchestrator | 21:10:49.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-05-19 21:10:49.820979 | orchestrator | 21:10:49.820 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-05-19 21:10:49.832169 | orchestrator | 21:10:49.831 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-05-19 21:10:49.846450 | orchestrator | 21:10:49.846 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-05-19 21:10:49.854692 | orchestrator | 21:10:49.854 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-05-19 21:10:49.889560 | orchestrator | 21:10:49.889 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-05-19 21:10:50.149142 | orchestrator | 21:10:50.148 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=f8343413-00d0-459f-8d8a-4508348eb38f]
2025-05-19 21:10:50.168833 | orchestrator | 21:10:50.168 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=43b96880-e893-431a-9e82-fe3cb3c87177]
2025-05-19 21:10:50.237707 | orchestrator | 21:10:50.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=9de35ea6-b803-49bc-8b65-554f85c20f06]
2025-05-19 21:10:50.245493 | orchestrator | 21:10:50.245 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=67cb46d1-5531-46d1-bade-650e39df9630]
2025-05-19 21:10:50.252448 | orchestrator | 21:10:50.252 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=36ab505a-3b56-4dbd-abfd-0a775538d54b]
2025-05-19 21:10:50.265400 | orchestrator | 21:10:50.265 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=377dee36-64db-427b-88c3-b195b97ec397]
2025-05-19 21:10:53.510760 | orchestrator | 21:10:53.510 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=627f7c06-7f46-4125-9a9a-b284f3ab6c46]
2025-05-19 21:10:53.517813 | orchestrator | 21:10:53.517 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-05-19 21:10:53.518608 | orchestrator | 21:10:53.518 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-05-19 21:10:53.518825 | orchestrator | 21:10:53.518 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-05-19 21:10:53.716374 | orchestrator | 21:10:53.715 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9ec84187-b5e9-444e-8e8c-277b48adf993]
2025-05-19 21:10:53.739700 | orchestrator | 21:10:53.739 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-05-19 21:10:53.740097 | orchestrator | 21:10:53.739 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-05-19 21:10:53.742530 | orchestrator | 21:10:53.742 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-05-19 21:10:53.743991 | orchestrator | 21:10:53.743 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=20c08f7b-3ecf-423c-b9c8-8df84c4bd019]
2025-05-19 21:10:53.744777 | orchestrator | 21:10:53.744 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-05-19 21:10:53.745807 | orchestrator | 21:10:53.745 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-05-19 21:10:53.745935 | orchestrator | 21:10:53.745 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
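Editor's note: the management subnet planned earlier uses CIDR 192.168.16.0/20 with a DHCP allocation pool of 192.168.31.200–192.168.31.250. A quick standalone sketch (values copied from the plan output, purely illustrative) confirms the pool sits inside the CIDR:

```python
import ipaddress

# Values taken from the terraform plan output above.
cidr = ipaddress.ip_network("192.168.16.0/20")
pool_start = ipaddress.ip_address("192.168.31.200")
pool_end = ipaddress.ip_address("192.168.31.250")

# Both pool boundaries must fall inside the subnet CIDR,
# and the pool start must not come after its end.
assert pool_start in cidr and pool_end in cidr
assert pool_start <= pool_end

# A /20 spans 4096 addresses; the pool reserves 51 of them
# near the top of the range (192.168.16.0 - 192.168.31.255).
print(cidr.num_addresses)                   # 4096
print(int(pool_end) - int(pool_start) + 1)  # 51
```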
2025-05-19 21:10:53.748050 | orchestrator | 21:10:53.747 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-05-19 21:10:53.748263 | orchestrator | 21:10:53.748 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-05-19 21:10:53.758249 | orchestrator | 21:10:53.758 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-05-19 21:10:53.941427 | orchestrator | 21:10:53.940 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=25d745c2-945d-404b-80a2-0ecd17c55193]
2025-05-19 21:10:53.948591 | orchestrator | 21:10:53.948 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-05-19 21:10:54.137321 | orchestrator | 21:10:54.136 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=3b59f8ef-da04-492d-bf0c-647d78fdf048]
2025-05-19 21:10:54.147025 | orchestrator | 21:10:54.146 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-05-19 21:10:54.337944 | orchestrator | 21:10:54.337 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=5bb2cd8c-789f-406a-9070-41b5e0c8c01c]
2025-05-19 21:10:54.351417 | orchestrator | 21:10:54.351 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-05-19 21:10:54.370164 | orchestrator | 21:10:54.369 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=a929ddc4-3249-4af5-8329-dc0f41194cee]
2025-05-19 21:10:54.375142 | orchestrator | 21:10:54.374 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-05-19 21:10:54.552187 | orchestrator | 21:10:54.551 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=4921cf73-16e2-4e6e-869c-a697f3fd4d87]
2025-05-19 21:10:54.560045 | orchestrator | 21:10:54.559 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-05-19 21:10:54.579209 | orchestrator | 21:10:54.578 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=dae91233-b7e2-4adc-ad52-2dc3fc7a8f6e]
2025-05-19 21:10:54.586510 | orchestrator | 21:10:54.586 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-05-19 21:10:54.725437 | orchestrator | 21:10:54.725 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=3044e8d5-ab83-4567-972a-13a6ce490169]
2025-05-19 21:10:54.730320 | orchestrator | 21:10:54.729 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-05-19 21:10:54.879967 | orchestrator | 21:10:54.879 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=454d6d7b-1aa8-4d14-b3ca-7b4d39d821ca]
2025-05-19 21:10:55.021770 | orchestrator | 21:10:55.021 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=bb97cf65-e201-4baa-9d6d-d7b072a2d60d]
2025-05-19 21:10:59.350463 | orchestrator | 21:10:59.350 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=ec07ad83-8ab4-4e48-8c99-653e5d074afe]
2025-05-19 21:10:59.416029 | orchestrator | 21:10:59.415 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=5378cac9-ccc3-4d6d-8a31-ef68d9a1e536]
2025-05-19 21:10:59.422255 | orchestrator | 21:10:59.421 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=bd7f78b5-79a8-4347-a330-b4d1698a4ffd]
2025-05-19 21:10:59.428022 | orchestrator | 21:10:59.427 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=30b9cba6-e54c-4a58-a10f-a2e4a3ef2c2a]
2025-05-19 21:10:59.444493 | orchestrator | 21:10:59.444 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=e4802e74-d3ca-49ff-8a28-43541eaadacc]
2025-05-19 21:10:59.463655 | orchestrator | 21:10:59.463 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=eaea3d2d-5c8c-4613-a0bf-13bc6da2b6ee]
2025-05-19 21:10:59.828571 | orchestrator | 21:10:59.828 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=af5bdf0a-0206-4cff-924a-6c0bbfd01546]
2025-05-19 21:11:00.789651 | orchestrator | 21:11:00.789 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=6ee1228d-4384-46e2-a155-04b1b21374ce]
2025-05-19 21:11:00.810098 | orchestrator | 21:11:00.809 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-05-19 21:11:00.817576 | orchestrator | 21:11:00.817 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-05-19 21:11:00.827408 | orchestrator | 21:11:00.827 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-05-19 21:11:00.828609 | orchestrator | 21:11:00.828 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-05-19 21:11:00.833802 | orchestrator | 21:11:00.833 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-05-19 21:11:00.836130 | orchestrator | 21:11:00.836 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-05-19 21:11:00.836264 | orchestrator | 21:11:00.836 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-05-19 21:11:07.638793 | orchestrator | 21:11:07.638 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=74a57ea8-d00f-4e91-ae7f-c93e6a28e4a4]
2025-05-19 21:11:07.652076 | orchestrator | 21:11:07.651 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-05-19 21:11:07.653431 | orchestrator | 21:11:07.653 STDOUT terraform: local_file.inventory: Creating...
2025-05-19 21:11:07.654315 | orchestrator | 21:11:07.654 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-05-19 21:11:07.662508 | orchestrator | 21:11:07.662 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=08bc26dba807aba15ac4f0dbd76efe198003b225]
2025-05-19 21:11:07.664222 | orchestrator | 21:11:07.664 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=d0cc1e57ab72108eb5b807c71b6ecb01b0c4bb26]
2025-05-19 21:11:08.323940 | orchestrator | 21:11:08.323 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=74a57ea8-d00f-4e91-ae7f-c93e6a28e4a4]
2025-05-19 21:11:10.818931 | orchestrator | 21:11:10.818 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-05-19 21:11:10.828158 | orchestrator | 21:11:10.827 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-05-19 21:11:10.835316 | orchestrator | 21:11:10.835 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-05-19 21:11:10.835543 | orchestrator | 21:11:10.835 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-05-19 21:11:10.838655 | orchestrator | 21:11:10.838 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-05-19 21:11:10.842848 | orchestrator | 21:11:10.842 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-05-19 21:11:20.821764 | orchestrator | 21:11:20.821 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-05-19 21:11:20.829006 | orchestrator | 21:11:20.828 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-05-19 21:11:20.836353 | orchestrator | 21:11:20.835 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-05-19 21:11:20.836455 | orchestrator | 21:11:20.836 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-05-19 21:11:20.839578 | orchestrator | 21:11:20.839 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-05-19 21:11:20.843762 | orchestrator | 21:11:20.843 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-05-19 21:11:21.256707 | orchestrator | 21:11:21.256 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=7260fc66-e424-4052-aeee-97d12ce4df5c]
2025-05-19 21:11:21.342827 | orchestrator | 21:11:21.342 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=93f1aca6-71d6-4199-bb14-561849cd5273]
2025-05-19 21:11:21.358644 | orchestrator | 21:11:21.358 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=b0b82add-6f46-45b8-b439-d78723b967ca]
2025-05-19 21:11:21.436983 | orchestrator | 21:11:21.436 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=05ace83b-2530-444c-b27c-7c4e458c6f95]
2025-05-19 21:11:30.822272 | orchestrator | 21:11:30.821 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-05-19 21:11:30.836585 | orchestrator | 21:11:30.836 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-05-19 21:11:31.729645 | orchestrator | 21:11:31.729 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=3ab5e93f-5d3f-4794-834f-8e3e2f1f9035]
2025-05-19 21:11:31.915655 | orchestrator | 21:11:31.915 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=707134b6-06a8-42e9-8ee1-c1de443ed186]
2025-05-19 21:11:31.926487 | orchestrator | 21:11:31.926 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-05-19 21:11:31.944598 | orchestrator | 21:11:31.944 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-05-19 21:11:31.944826 | orchestrator | 21:11:31.944 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-05-19 21:11:31.945430 | orchestrator | 21:11:31.945 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-05-19 21:11:31.947440 | orchestrator | 21:11:31.947 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-05-19 21:11:31.954319 | orchestrator | 21:11:31.954 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3052133214044616958]
2025-05-19 21:11:31.955176 | orchestrator | 21:11:31.955 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-05-19 21:11:31.955562 | orchestrator | 21:11:31.955 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-05-19 21:11:31.967660 | orchestrator | 21:11:31.967 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-05-19 21:11:31.971687 | orchestrator | 21:11:31.971 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-05-19 21:11:31.989480 | orchestrator | 21:11:31.989 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-05-19 21:11:31.991445 | orchestrator | 21:11:31.991 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-05-19 21:11:37.281138 | orchestrator | 21:11:37.280 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=93f1aca6-71d6-4199-bb14-561849cd5273/5aea9423-7155-4edc-a2c1-cc12eb50d261]
2025-05-19 21:11:37.285691 | orchestrator | 21:11:37.285 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=707134b6-06a8-42e9-8ee1-c1de443ed186/d1012b89-dbd1-43a9-85f9-d367e08581b3]
2025-05-19 21:11:37.302848 | orchestrator | 21:11:37.302 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=7260fc66-e424-4052-aeee-97d12ce4df5c/1c1b0e05-b224-4a51-87f1-7edfa2f843ba]
2025-05-19 21:11:37.323985 | orchestrator | 21:11:37.323 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=707134b6-06a8-42e9-8ee1-c1de443ed186/934db128-59d0-4992-8eb9-92fedfad2305]
2025-05-19 21:11:37.337698 | orchestrator | 21:11:37.337 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=93f1aca6-71d6-4199-bb14-561849cd5273/cd626c85-4d79-4ec3-873e-c38f80c6408d]
2025-05-19 21:11:37.356331 | orchestrator | 21:11:37.355 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=93f1aca6-71d6-4199-bb14-561849cd5273/65b1a457-74f9-440b-9c0b-913fdfb04314]
2025-05-19 21:11:37.361775 | orchestrator | 21:11:37.361 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=7260fc66-e424-4052-aeee-97d12ce4df5c/497cbfa2-65b5-4f15-af98-7aa46abcc2e6]
2025-05-19 21:11:37.362756 | orchestrator | 21:11:37.362 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=707134b6-06a8-42e9-8ee1-c1de443ed186/53ed34a9-290d-4031-aa3e-f95b5c6d33b8]
2025-05-19 21:11:37.385249 | orchestrator | 21:11:37.384 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=7260fc66-e424-4052-aeee-97d12ce4df5c/fb54ccde-5cdf-4bdf-8e5b-bd2626265c70]
2025-05-19 21:11:41.994722 | orchestrator | 21:11:41.994 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-05-19 21:11:51.999509 | orchestrator | 21:11:51.999 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-05-19 21:11:52.604281 | orchestrator | 21:11:52.603 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=b715c581-62c3-4189-9496-81291b1a8377]
2025-05-19 21:11:52.633966 | orchestrator | 21:11:52.633 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
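Editor's note: when post-processing job logs like this one, the final "Apply complete!" summary is a convenient line to check mechanically against the plan's "Plan: 64 to add" count. A minimal sketch (the function and regex here are illustrative, not part of the testbed tooling):

```python
import re

# Matches Terraform's apply summary, e.g. the line logged above.
SUMMARY = re.compile(
    r"Apply complete! Resources: (\d+) added, (\d+) changed, (\d+) destroyed\."
)

def parse_apply_summary(line):
    """Return (added, changed, destroyed) as ints, or None if no match."""
    m = SUMMARY.search(line)
    return tuple(int(g) for g in m.groups()) if m else None

line = ("21:11:52.633 STDOUT terraform: Apply complete! "
        "Resources: 64 added, 0 changed, 0 destroyed.")
print(parse_apply_summary(line))  # (64, 0, 0)
```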
2025-05-19 21:11:52.634074 | orchestrator | 21:11:52.633 STDOUT terraform: Outputs: 2025-05-19 21:11:52.634154 | orchestrator | 21:11:52.634 STDOUT terraform: manager_address = 2025-05-19 21:11:52.634214 | orchestrator | 21:11:52.634 STDOUT terraform: private_key = 2025-05-19 21:11:52.764723 | orchestrator | ok: Runtime: 0:01:35.010703 2025-05-19 21:11:52.799826 | 2025-05-19 21:11:52.799962 | TASK [Create infrastructure (stable)] 2025-05-19 21:11:53.331808 | orchestrator | skipping: Conditional result was False 2025-05-19 21:11:53.341841 | 2025-05-19 21:11:53.342189 | TASK [Fetch manager address] 2025-05-19 21:11:53.801189 | orchestrator | ok 2025-05-19 21:11:53.820281 | 2025-05-19 21:11:53.820514 | TASK [Set manager_host address] 2025-05-19 21:11:53.886332 | orchestrator | ok 2025-05-19 21:11:53.893426 | 2025-05-19 21:11:53.893550 | LOOP [Update ansible collections] 2025-05-19 21:11:54.808914 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 21:11:54.809313 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-19 21:11:54.809373 | orchestrator | Starting galaxy collection install process 2025-05-19 21:11:54.809413 | orchestrator | Process install dependency map 2025-05-19 21:11:54.809448 | orchestrator | Starting collection install process 2025-05-19 21:11:54.809496 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons' 2025-05-19 21:11:54.809537 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons 2025-05-19 21:11:54.809578 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-19 21:11:54.809648 | orchestrator | ok: Item: commons Runtime: 0:00:00.577734 2025-05-19 21:11:55.684216 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 
2025-05-19 21:11:55.684474 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-19 21:11:55.684537 | orchestrator | Starting galaxy collection install process 2025-05-19 21:11:55.684580 | orchestrator | Process install dependency map 2025-05-19 21:11:55.684619 | orchestrator | Starting collection install process 2025-05-19 21:11:55.684674 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services' 2025-05-19 21:11:55.684714 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services 2025-05-19 21:11:55.684749 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-19 21:11:55.684803 | orchestrator | ok: Item: services Runtime: 0:00:00.613785 2025-05-19 21:11:55.707099 | 2025-05-19 21:11:55.707244 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-19 21:12:06.262018 | orchestrator | ok 2025-05-19 21:12:06.271763 | 2025-05-19 21:12:06.271924 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-19 21:13:06.311896 | orchestrator | ok 2025-05-19 21:13:06.319349 | 2025-05-19 21:13:06.319481 | TASK [Fetch manager ssh hostkey] 2025-05-19 21:13:07.897130 | orchestrator | Output suppressed because no_log was given 2025-05-19 21:13:07.913790 | 2025-05-19 21:13:07.913987 | TASK [Get ssh keypair from terraform environment] 2025-05-19 21:13:08.451475 | orchestrator | ok: Runtime: 0:00:00.011819 2025-05-19 21:13:08.468268 | 2025-05-19 21:13:08.468473 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-19 21:13:08.512209 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
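The 'Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"' task above is an Ansible `wait_for` check with a `search_regex`. A plain-bash sketch of the same poll, reading the SSH version banner that sshd sends right after connect; the host name, port, and timeout below are example values, not taken from the job:

```shell
# Check whether an SSH banner string identifies OpenSSH.
banner_ok() {
    case "$1" in *OpenSSH*) return 0 ;; *) return 1 ;; esac
}

# Poll until port 22 answers with an OpenSSH banner or the timeout expires.
# Uses bash's /dev/tcp pseudo-device, so this requires bash, not POSIX sh.
wait_for_ssh() {
    host="$1"; timeout="${2:-300}"
    end=$(( $(date +%s) + timeout ))
    while [ "$(date +%s)" -lt "$end" ]; do
        # sshd sends its version banner immediately after the TCP connect.
        banner=$( (exec 3<>"/dev/tcp/${host}/22"; head -c 64 <&3) 2>/dev/null )
        if banner_ok "$banner"; then return 0; fi
        sleep 5
    done
    return 1
}
```

The extra "Wait a little longer" task that follows is just a fixed settle delay on top of this check.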
2025-05-19 21:13:08.521252 | 2025-05-19 21:13:08.521432 | TASK [Run manager part 0] 2025-05-19 21:13:09.469854 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 21:13:09.512916 | orchestrator | 2025-05-19 21:13:09.512960 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-19 21:13:09.512968 | orchestrator | 2025-05-19 21:13:09.512981 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-19 21:13:11.018447 | orchestrator | ok: [testbed-manager] 2025-05-19 21:13:11.018889 | orchestrator | 2025-05-19 21:13:11.018916 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-19 21:13:11.018925 | orchestrator | 2025-05-19 21:13:11.018934 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 21:13:12.826823 | orchestrator | ok: [testbed-manager] 2025-05-19 21:13:12.826870 | orchestrator | 2025-05-19 21:13:12.826877 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-19 21:13:13.468964 | orchestrator | ok: [testbed-manager] 2025-05-19 21:13:13.469075 | orchestrator | 2025-05-19 21:13:13.469090 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-19 21:13:13.520956 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:13:13.521015 | orchestrator | 2025-05-19 21:13:13.521029 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-19 21:13:13.554176 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:13:13.554225 | orchestrator | 2025-05-19 21:13:13.554233 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-19 21:13:13.585196 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:13:13.585254 | 
orchestrator | 2025-05-19 21:13:13.585263 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-19 21:13:13.612505 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:13:13.612548 | orchestrator | 2025-05-19 21:13:13.612554 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-19 21:13:13.638746 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:13:13.638789 | orchestrator | 2025-05-19 21:13:13.638795 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-19 21:13:13.670507 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:13:13.670552 | orchestrator | 2025-05-19 21:13:13.670561 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-19 21:13:13.698028 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:13:13.698086 | orchestrator | 2025-05-19 21:13:13.698104 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-19 21:13:14.424496 | orchestrator | changed: [testbed-manager] 2025-05-19 21:13:14.424573 | orchestrator | 2025-05-19 21:13:14.424586 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-19 21:16:00.906916 | orchestrator | changed: [testbed-manager] 2025-05-19 21:16:00.906981 | orchestrator | 2025-05-19 21:16:00.906994 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-19 21:17:25.984897 | orchestrator | changed: [testbed-manager] 2025-05-19 21:17:25.985002 | orchestrator | 2025-05-19 21:17:25.985018 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-19 21:17:45.667659 | orchestrator | changed: [testbed-manager] 2025-05-19 21:17:45.667774 | orchestrator | 2025-05-19 21:17:45.667800 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-05-19 21:17:54.086519 | orchestrator | changed: [testbed-manager] 2025-05-19 21:17:54.086556 | orchestrator | 2025-05-19 21:17:54.086582 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-19 21:17:54.122806 | orchestrator | ok: [testbed-manager] 2025-05-19 21:17:54.122848 | orchestrator | 2025-05-19 21:17:54.122856 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-19 21:17:54.906387 | orchestrator | ok: [testbed-manager] 2025-05-19 21:17:54.906502 | orchestrator | 2025-05-19 21:17:54.906520 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-19 21:17:55.624172 | orchestrator | changed: [testbed-manager] 2025-05-19 21:17:55.624288 | orchestrator | 2025-05-19 21:17:55.624305 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-19 21:18:01.622908 | orchestrator | changed: [testbed-manager] 2025-05-19 21:18:01.622997 | orchestrator | 2025-05-19 21:18:01.623036 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-19 21:18:07.173419 | orchestrator | changed: [testbed-manager] 2025-05-19 21:18:07.173530 | orchestrator | 2025-05-19 21:18:07.173551 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-19 21:18:09.707938 | orchestrator | changed: [testbed-manager] 2025-05-19 21:18:09.708039 | orchestrator | 2025-05-19 21:18:09.708057 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-19 21:18:11.423306 | orchestrator | changed: [testbed-manager] 2025-05-19 21:18:11.423395 | orchestrator | 2025-05-19 21:18:11.423411 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-19 
21:18:12.526979 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-19 21:18:12.527744 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-19 21:18:12.527767 | orchestrator | 2025-05-19 21:18:12.527783 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-19 21:18:12.570464 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-19 21:18:12.570572 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-19 21:18:12.570630 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-19 21:18:12.570652 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-05-19 21:18:15.970987 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-19 21:18:15.971084 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-19 21:18:15.971100 | orchestrator | 2025-05-19 21:18:15.971114 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-19 21:18:16.536664 | orchestrator | changed: [testbed-manager] 2025-05-19 21:18:16.536754 | orchestrator | 2025-05-19 21:18:16.536770 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-19 21:19:34.651845 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-19 21:19:34.651979 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-19 21:19:34.652070 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-19 21:19:34.652138 | orchestrator | 2025-05-19 21:19:34.652162 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-19 21:19:36.999607 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-05-19 21:19:36.999736 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-19 21:19:36.999753 | orchestrator | 2025-05-19 21:19:36.999766 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-19 21:19:36.999779 | orchestrator | 2025-05-19 21:19:36.999790 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 21:19:38.415841 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:38.415934 | orchestrator | 2025-05-19 21:19:38.415957 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-19 21:19:38.453950 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:38.454094 | orchestrator | 2025-05-19 21:19:38.454125 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-19 21:19:38.516663 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:38.516788 | orchestrator | 2025-05-19 21:19:38.516805 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-19 21:19:39.268099 | orchestrator | changed: [testbed-manager] 2025-05-19 21:19:39.268179 | orchestrator | 2025-05-19 21:19:39.268192 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-19 21:19:40.004878 | orchestrator | changed: [testbed-manager] 2025-05-19 21:19:40.004978 | orchestrator | 2025-05-19 21:19:40.004995 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-19 21:19:41.439052 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-19 21:19:41.439092 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-19 21:19:41.439098 | orchestrator | 2025-05-19 21:19:41.439111 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-05-19 21:19:42.920501 | orchestrator | changed: [testbed-manager] 2025-05-19 21:19:42.920637 | orchestrator | 2025-05-19 21:19:42.920664 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-19 21:19:44.716295 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-19 21:19:44.716394 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-19 21:19:44.716418 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-19 21:19:44.716433 | orchestrator | 2025-05-19 21:19:44.716446 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-19 21:19:45.300656 | orchestrator | changed: [testbed-manager] 2025-05-19 21:19:45.300839 | orchestrator | 2025-05-19 21:19:45.300868 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-19 21:19:45.368002 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:19:45.368086 | orchestrator | 2025-05-19 21:19:45.368100 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-19 21:19:46.229885 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 21:19:46.229925 | orchestrator | changed: [testbed-manager] 2025-05-19 21:19:46.229933 | orchestrator | 2025-05-19 21:19:46.229939 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-19 21:19:46.277223 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:19:46.277285 | orchestrator | 2025-05-19 21:19:46.277299 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-19 21:19:46.314831 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:19:46.314871 | orchestrator | 2025-05-19 21:19:46.314878 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-05-19 21:19:46.344602 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:19:46.344721 | orchestrator | 2025-05-19 21:19:46.344740 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-19 21:19:46.396268 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:19:46.396327 | orchestrator | 2025-05-19 21:19:46.396333 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-19 21:19:47.115351 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:47.115441 | orchestrator | 2025-05-19 21:19:47.115454 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-19 21:19:47.115465 | orchestrator | 2025-05-19 21:19:47.115476 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 21:19:48.439607 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:48.439642 | orchestrator | 2025-05-19 21:19:48.439648 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-19 21:19:49.414556 | orchestrator | changed: [testbed-manager] 2025-05-19 21:19:49.414666 | orchestrator | 2025-05-19 21:19:49.414677 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:19:49.414688 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-19 21:19:49.414718 | orchestrator | 2025-05-19 21:19:49.805252 | orchestrator | ok: Runtime: 0:06:40.716010 2025-05-19 21:19:49.821805 | 2025-05-19 21:19:49.821929 | TASK [Point out that the log in on the manager is now possible] 2025-05-19 21:19:49.868704 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
2025-05-19 21:19:49.878064 | 2025-05-19 21:19:49.878179 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-19 21:19:49.923908 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-19 21:19:49.932382 | 2025-05-19 21:19:49.932492 | TASK [Run manager part 1 + 2] 2025-05-19 21:19:50.789777 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-19 21:19:50.845801 | orchestrator | 2025-05-19 21:19:50.845921 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-19 21:19:50.845941 | orchestrator | 2025-05-19 21:19:50.845974 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 21:19:53.821269 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:53.821361 | orchestrator | 2025-05-19 21:19:53.821383 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-19 21:19:53.865385 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:19:53.865454 | orchestrator | 2025-05-19 21:19:53.865466 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-19 21:19:53.912664 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:53.912759 | orchestrator | 2025-05-19 21:19:53.912773 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-19 21:19:53.964823 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:53.964889 | orchestrator | 2025-05-19 21:19:53.964901 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-19 21:19:54.037575 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:54.037657 | orchestrator | 2025-05-19 21:19:54.037667 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-19 21:19:54.100396 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:54.100474 | orchestrator | 2025-05-19 21:19:54.100485 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-19 21:19:54.146882 | orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-19 21:19:54.146956 | orchestrator | 2025-05-19 21:19:54.146963 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-19 21:19:54.878693 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:54.878780 | orchestrator | 2025-05-19 21:19:54.878789 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-19 21:19:54.933835 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:19:54.933926 | orchestrator | 2025-05-19 21:19:54.933936 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-19 21:19:56.399695 | orchestrator | changed: [testbed-manager] 2025-05-19 21:19:56.399790 | orchestrator | 2025-05-19 21:19:56.399799 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-19 21:19:56.993677 | orchestrator | ok: [testbed-manager] 2025-05-19 21:19:56.994987 | orchestrator | 2025-05-19 21:19:56.995000 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-19 21:19:58.135485 | orchestrator | changed: [testbed-manager] 2025-05-19 21:19:58.136758 | orchestrator | 2025-05-19 21:19:58.136771 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-19 21:20:10.154962 | orchestrator | changed: [testbed-manager] 2025-05-19 21:20:10.155069 | orchestrator | 
2025-05-19 21:20:10.155087 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-19 21:20:10.811454 | orchestrator | ok: [testbed-manager] 2025-05-19 21:20:10.811541 | orchestrator | 2025-05-19 21:20:10.811569 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-19 21:20:10.866661 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:20:10.866778 | orchestrator | 2025-05-19 21:20:10.866805 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-19 21:20:11.763095 | orchestrator | changed: [testbed-manager] 2025-05-19 21:20:11.763177 | orchestrator | 2025-05-19 21:20:11.763193 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-19 21:20:12.722795 | orchestrator | changed: [testbed-manager] 2025-05-19 21:20:12.722884 | orchestrator | 2025-05-19 21:20:12.722899 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-19 21:20:13.288156 | orchestrator | changed: [testbed-manager] 2025-05-19 21:20:13.288244 | orchestrator | 2025-05-19 21:20:13.288259 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-19 21:20:13.330256 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-19 21:20:13.330323 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-19 21:20:13.330329 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-19 21:20:13.330334 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-05-19 21:20:15.495539 | orchestrator | changed: [testbed-manager] 2025-05-19 21:20:15.495641 | orchestrator | 2025-05-19 21:20:15.495658 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-19 21:20:24.215515 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-19 21:20:24.215720 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-19 21:20:24.215791 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-19 21:20:24.215813 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-19 21:20:24.215846 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-19 21:20:24.215864 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-19 21:20:24.215881 | orchestrator | 2025-05-19 21:20:24.215899 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-19 21:20:25.256046 | orchestrator | changed: [testbed-manager] 2025-05-19 21:20:25.256145 | orchestrator | 2025-05-19 21:20:25.256161 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-19 21:20:25.304499 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:20:25.304577 | orchestrator | 2025-05-19 21:20:25.304592 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-19 21:20:28.371535 | orchestrator | changed: [testbed-manager] 2025-05-19 21:20:28.371649 | orchestrator | 2025-05-19 21:20:28.371668 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-19 21:20:28.415626 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:20:28.415714 | orchestrator | 2025-05-19 21:20:28.415732 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-19 21:21:58.393754 | orchestrator | changed: [testbed-manager] 2025-05-19 
21:21:58.393910 | orchestrator | 2025-05-19 21:21:58.393931 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-19 21:21:59.486878 | orchestrator | ok: [testbed-manager] 2025-05-19 21:21:59.486928 | orchestrator | 2025-05-19 21:21:59.486935 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:21:59.486943 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-19 21:21:59.486948 | orchestrator | 2025-05-19 21:22:00.076087 | orchestrator | ok: Runtime: 0:02:09.369226 2025-05-19 21:22:00.094754 | 2025-05-19 21:22:00.094933 | TASK [Reboot manager] 2025-05-19 21:22:01.634016 | orchestrator | ok: Runtime: 0:00:00.958288 2025-05-19 21:22:01.651531 | 2025-05-19 21:22:01.651682 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-19 21:22:16.508703 | orchestrator | ok 2025-05-19 21:22:16.519646 | 2025-05-19 21:22:16.519805 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-19 21:23:16.552329 | orchestrator | ok 2025-05-19 21:23:16.559661 | 2025-05-19 21:23:16.559788 | TASK [Deploy manager + bootstrap nodes] 2025-05-19 21:23:18.945935 | orchestrator | 2025-05-19 21:23:18.946208 | orchestrator | # DEPLOY MANAGER 2025-05-19 21:23:18.946234 | orchestrator | 2025-05-19 21:23:18.946248 | orchestrator | + set -e 2025-05-19 21:23:18.946262 | orchestrator | + echo 2025-05-19 21:23:18.946277 | orchestrator | + echo '# DEPLOY MANAGER' 2025-05-19 21:23:18.946294 | orchestrator | + echo 2025-05-19 21:23:18.946348 | orchestrator | + cat /opt/manager-vars.sh 2025-05-19 21:23:18.949646 | orchestrator | export NUMBER_OF_NODES=6 2025-05-19 21:23:18.949702 | orchestrator | 2025-05-19 21:23:18.949723 | orchestrator | export CEPH_VERSION=reef 2025-05-19 21:23:18.949744 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-19 21:23:18.949765 | orchestrator 
| export MANAGER_VERSION=latest 2025-05-19 21:23:18.949844 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-19 21:23:18.949856 | orchestrator | 2025-05-19 21:23:18.949875 | orchestrator | export ARA=false 2025-05-19 21:23:18.949887 | orchestrator | export TEMPEST=false 2025-05-19 21:23:18.949987 | orchestrator | export IS_ZUUL=true 2025-05-19 21:23:18.950001 | orchestrator | 2025-05-19 21:23:18.950060 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197 2025-05-19 21:23:18.950076 | orchestrator | export EXTERNAL_API=false 2025-05-19 21:23:18.950087 | orchestrator | 2025-05-19 21:23:18.950110 | orchestrator | export IMAGE_USER=ubuntu 2025-05-19 21:23:18.950121 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-19 21:23:18.950133 | orchestrator | 2025-05-19 21:23:18.950147 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-19 21:23:18.950170 | orchestrator | 2025-05-19 21:23:18.950185 | orchestrator | + echo 2025-05-19 21:23:18.950205 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 21:23:18.950936 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 21:23:18.950983 | orchestrator | ++ INTERACTIVE=false 2025-05-19 21:23:18.951103 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 21:23:18.951140 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 21:23:18.951154 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 21:23:18.951166 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 21:23:18.951176 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 21:23:18.951192 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 21:23:18.951204 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 21:23:18.951215 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 21:23:18.951252 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-19 21:23:18.951274 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 21:23:18.951285 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 21:23:18.951296 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-19 21:23:18.951307 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-19 21:23:18.951318 | orchestrator | ++ export ARA=false 2025-05-19 21:23:18.951339 | orchestrator | ++ ARA=false 2025-05-19 21:23:18.951351 | orchestrator | ++ export TEMPEST=false 2025-05-19 21:23:18.951364 | orchestrator | ++ TEMPEST=false 2025-05-19 21:23:18.951382 | orchestrator | ++ export IS_ZUUL=true 2025-05-19 21:23:18.951401 | orchestrator | ++ IS_ZUUL=true 2025-05-19 21:23:18.951420 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197 2025-05-19 21:23:18.951438 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197 2025-05-19 21:23:18.951462 | orchestrator | ++ export EXTERNAL_API=false 2025-05-19 21:23:18.951481 | orchestrator | ++ EXTERNAL_API=false 2025-05-19 21:23:18.951498 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-19 21:23:18.951516 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-19 21:23:18.951534 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-19 21:23:18.951551 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-19 21:23:18.951571 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-19 21:23:18.951589 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-19 21:23:18.951607 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-19 21:23:19.011059 | orchestrator | + docker version 2025-05-19 21:23:19.257787 | orchestrator | Client: Docker Engine - Community 2025-05-19 21:23:19.257935 | orchestrator | Version: 27.5.1 2025-05-19 21:23:19.257993 | orchestrator | API version: 1.47 2025-05-19 21:23:19.258006 | orchestrator | Go version: go1.22.11 2025-05-19 21:23:19.258065 | orchestrator | Git commit: 9f9e405 2025-05-19 21:23:19.258080 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-19 21:23:19.258092 | orchestrator | OS/Arch: linux/amd64 2025-05-19 21:23:19.258103 | orchestrator | Context: default 2025-05-19 21:23:19.258115 | 
orchestrator | 2025-05-19 21:23:19.258126 | orchestrator | Server: Docker Engine - Community 2025-05-19 21:23:19.258137 | orchestrator | Engine: 2025-05-19 21:23:19.258149 | orchestrator | Version: 27.5.1 2025-05-19 21:23:19.258160 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-05-19 21:23:19.258171 | orchestrator | Go version: go1.22.11 2025-05-19 21:23:19.258182 | orchestrator | Git commit: 4c9b3b0 2025-05-19 21:23:19.258223 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-19 21:23:19.258235 | orchestrator | OS/Arch: linux/amd64 2025-05-19 21:23:19.258245 | orchestrator | Experimental: false 2025-05-19 21:23:19.258257 | orchestrator | containerd: 2025-05-19 21:23:19.258268 | orchestrator | Version: 1.7.27 2025-05-19 21:23:19.258279 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-19 21:23:19.258290 | orchestrator | runc: 2025-05-19 21:23:19.258316 | orchestrator | Version: 1.2.5 2025-05-19 21:23:19.258328 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-19 21:23:19.258339 | orchestrator | docker-init: 2025-05-19 21:23:19.258350 | orchestrator | Version: 0.19.0 2025-05-19 21:23:19.258361 | orchestrator | GitCommit: de40ad0 2025-05-19 21:23:19.261539 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-19 21:23:19.270828 | orchestrator | + set -e 2025-05-19 21:23:19.270879 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 21:23:19.270891 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 21:23:19.270902 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 21:23:19.270913 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 21:23:19.270924 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 21:23:19.270937 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 21:23:19.270975 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-19 21:23:19.270986 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 21:23:19.270998 | orchestrator | ++ 
MANAGER_VERSION=latest 2025-05-19 21:23:19.271009 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-19 21:23:19.271020 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-19 21:23:19.271031 | orchestrator | ++ export ARA=false 2025-05-19 21:23:19.271042 | orchestrator | ++ ARA=false 2025-05-19 21:23:19.271078 | orchestrator | ++ export TEMPEST=false 2025-05-19 21:23:19.271090 | orchestrator | ++ TEMPEST=false 2025-05-19 21:23:19.271100 | orchestrator | ++ export IS_ZUUL=true 2025-05-19 21:23:19.271111 | orchestrator | ++ IS_ZUUL=true 2025-05-19 21:23:19.271122 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197 2025-05-19 21:23:19.271133 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197 2025-05-19 21:23:19.271152 | orchestrator | ++ export EXTERNAL_API=false 2025-05-19 21:23:19.271163 | orchestrator | ++ EXTERNAL_API=false 2025-05-19 21:23:19.271174 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-19 21:23:19.271184 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-19 21:23:19.271195 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-19 21:23:19.271206 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-19 21:23:19.271244 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-19 21:23:19.271256 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-19 21:23:19.271266 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 21:23:19.271277 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 21:23:19.271288 | orchestrator | ++ INTERACTIVE=false 2025-05-19 21:23:19.271299 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 21:23:19.271309 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 21:23:19.271324 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 21:23:19.271335 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-19 21:23:19.271346 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-05-19 21:23:19.279260 | orchestrator | + set -e 2025-05-19 
21:23:19.279298 | orchestrator | + VERSION=reef 2025-05-19 21:23:19.279618 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-19 21:23:19.285767 | orchestrator | + [[ -n ceph_version: reef ]] 2025-05-19 21:23:19.285887 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-05-19 21:23:19.291937 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-05-19 21:23:19.298379 | orchestrator | + set -e 2025-05-19 21:23:19.298919 | orchestrator | + VERSION=2024.2 2025-05-19 21:23:19.299553 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-19 21:23:19.303706 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-05-19 21:23:19.303751 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-05-19 21:23:19.309173 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-19 21:23:19.310004 | orchestrator | ++ semver latest 7.0.0 2025-05-19 21:23:19.371785 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-19 21:23:19.371894 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-19 21:23:19.371911 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-19 21:23:19.371924 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-19 21:23:19.413283 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-19 21:23:19.414376 | orchestrator | + source /opt/venv/bin/activate 2025-05-19 21:23:19.415885 | orchestrator | ++ deactivate nondestructive 2025-05-19 21:23:19.416090 | orchestrator | ++ '[' -n '' ']' 2025-05-19 21:23:19.416110 | orchestrator | ++ '[' -n '' ']' 2025-05-19 21:23:19.416123 | orchestrator | ++ hash -r 2025-05-19 21:23:19.416167 | orchestrator | ++ '[' -n '' ']' 2025-05-19 21:23:19.416181 | orchestrator | ++ unset VIRTUAL_ENV 
2025-05-19 21:23:19.416220 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-19 21:23:19.416246 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-19 21:23:19.416259 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-19 21:23:19.416270 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-19 21:23:19.416281 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-19 21:23:19.416292 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-19 21:23:19.416307 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 21:23:19.416319 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-19 21:23:19.416330 | orchestrator | ++ export PATH 2025-05-19 21:23:19.416341 | orchestrator | ++ '[' -n '' ']' 2025-05-19 21:23:19.416352 | orchestrator | ++ '[' -z '' ']' 2025-05-19 21:23:19.416362 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-19 21:23:19.416373 | orchestrator | ++ PS1='(venv) ' 2025-05-19 21:23:19.416384 | orchestrator | ++ export PS1 2025-05-19 21:23:19.416395 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-19 21:23:19.416406 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-19 21:23:19.416417 | orchestrator | ++ hash -r 2025-05-19 21:23:19.416428 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-19 21:23:20.546716 | orchestrator | 2025-05-19 21:23:20.546820 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-19 21:23:20.546832 | orchestrator | 2025-05-19 21:23:20.546859 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-19 21:23:21.086195 | orchestrator | ok: [testbed-manager] 2025-05-19 21:23:21.086307 | orchestrator | 
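The `set-ceph-version.sh` and `set-openstack-version.sh` steps traced above rewrite a single `key: value` line in `configuration.yml` with a grep guard followed by `sed -i`. A minimal sketch of that pattern as a reusable function — the function name and the configurable file argument are illustrative; the testbed scripts hard-code `/opt/configuration/environments/manager/configuration.yml`:

```shell
#!/usr/bin/env bash
# Sketch of the grep-then-sed version bump performed by
# set-ceph-version.sh / set-openstack-version.sh in the trace above.
# The function name is illustrative; the real scripts take only the
# version and hard-code the configuration file path.
set_config_version() {
    local key="$1" version="$2" config="$3"
    # Only rewrite the line when the key is already present, mirroring
    # the [[ -n ... ]] guard visible in the trace.
    if grep -q "^${key}:" "$config"; then
        sed -i "s/^${key}: .*/${key}: ${version}/" "$config"
    fi
}
```

For example, `set_config_version ceph_version reef configuration.yml` reproduces the `ceph_version: reef` rewrite seen above.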
2025-05-19 21:23:21.086325 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-19 21:23:22.073577 | orchestrator | changed: [testbed-manager] 2025-05-19 21:23:22.073682 | orchestrator | 2025-05-19 21:23:22.073697 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-19 21:23:22.073709 | orchestrator | 2025-05-19 21:23:22.073719 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 21:23:24.514931 | orchestrator | ok: [testbed-manager] 2025-05-19 21:23:24.515095 | orchestrator | 2025-05-19 21:23:24.515114 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-19 21:23:29.355930 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-19 21:23:29.356104 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.7.2) 2025-05-19 21:23:29.356122 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:reef) 2025-05-19 21:23:29.356137 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:latest) 2025-05-19 21:23:29.356148 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:2024.2) 2025-05-19 21:23:29.356159 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.3-alpine) 2025-05-19 21:23:29.356171 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.2.2) 2025-05-19 21:23:29.356182 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:latest) 2025-05-19 21:23:29.356192 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:latest) 2025-05-19 21:23:29.356203 | orchestrator | changed: [testbed-manager] => 
(item=registry.osism.tech/dockerhub/library/postgres:16.9-alpine) 2025-05-19 21:23:29.356214 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.4.0) 2025-05-19 21:23:29.356225 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.19.3) 2025-05-19 21:23:29.356262 | orchestrator | 2025-05-19 21:23:29.356275 | orchestrator | TASK [Check status] ************************************************************ 2025-05-19 21:24:45.085968 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-19 21:24:45.086225 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-19 21:24:45.086250 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-19 21:24:45.086269 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-19 21:24:45.086304 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j414289844653.1544', 'results_file': '/home/dragon/.ansible_async/j414289844653.1544', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086334 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j823775159367.1569', 'results_file': '/home/dragon/.ansible_async/j823775159367.1569', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.7.2', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086357 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-05-19 21:24:45.086376 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j811264727045.1594', 'results_file': '/home/dragon/.ansible_async/j811264727045.1594', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:reef', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086394 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j281495543242.1626', 'results_file': '/home/dragon/.ansible_async/j281495543242.1626', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:latest', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086412 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-19 21:24:45.086442 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j78114633627.1658', 'results_file': '/home/dragon/.ansible_async/j78114633627.1658', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:2024.2', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086460 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j55659100761.1690', 'results_file': '/home/dragon/.ansible_async/j55659100761.1690', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.3-alpine', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086479 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
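The "Pull images" task above starts each `docker pull` asynchronously, and the "Check status" task then polls the async job results until every pull has finished (hence the FAILED - RETRYING lines followed by `changed` items). The same fan-out/collect shape in plain shell, as a sketch — the function name is illustrative:

```shell
# Fan-out/collect sketch of the async "Pull images" + "Check status"
# pattern above: launch every job in the background, then wait for each
# one and remember any failure. The function name is illustrative.
run_all_in_background() {
    local pids=() rc=0 cmd pid
    for cmd in "$@"; do
        bash -c "$cmd" &      # fire and forget, like an async Ansible task
        pids+=("$!")
    done
    for pid in "${pids[@]}"; do
        wait "$pid" || rc=1   # collect results, like async_status with retries
    done
    return "$rc"
}
```

For instance, `run_all_in_background 'docker pull img-a' 'docker pull img-b'` would pull both images concurrently and return non-zero if either pull fails.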
2025-05-19 21:24:45.086500 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j633815820519.1723', 'results_file': '/home/dragon/.ansible_async/j633815820519.1723', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.2.2', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086536 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j384708890072.1756', 'results_file': '/home/dragon/.ansible_async/j384708890072.1756', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:latest', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086555 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j941600381086.1788', 'results_file': '/home/dragon/.ansible_async/j941600381086.1788', 'changed': True, 'item': 'registry.osism.tech/osism/osism:latest', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086571 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j704273569651.1820', 'results_file': '/home/dragon/.ansible_async/j704273569651.1820', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.9-alpine', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086590 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j180809506590.1858', 'results_file': '/home/dragon/.ansible_async/j180809506590.1858', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.4.0', 'ansible_loop_var': 'item'}) 2025-05-19 21:24:45.086640 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j689877500125.1893', 'results_file': '/home/dragon/.ansible_async/j689877500125.1893', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.19.3', 'ansible_loop_var': 
'item'}) 2025-05-19 21:24:45.086659 | orchestrator | 2025-05-19 21:24:45.086677 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-19 21:24:45.129497 | orchestrator | ok: [testbed-manager] 2025-05-19 21:24:45.129597 | orchestrator | 2025-05-19 21:24:45.129610 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-19 21:24:45.566421 | orchestrator | changed: [testbed-manager] 2025-05-19 21:24:45.566528 | orchestrator | 2025-05-19 21:24:45.566545 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-19 21:24:45.894570 | orchestrator | changed: [testbed-manager] 2025-05-19 21:24:45.894698 | orchestrator | 2025-05-19 21:24:45.894728 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-19 21:24:46.228836 | orchestrator | changed: [testbed-manager] 2025-05-19 21:24:46.228937 | orchestrator | 2025-05-19 21:24:46.228953 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-19 21:24:46.291579 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:24:46.291674 | orchestrator | 2025-05-19 21:24:46.291688 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-19 21:24:46.600787 | orchestrator | ok: [testbed-manager] 2025-05-19 21:24:46.600889 | orchestrator | 2025-05-19 21:24:46.600905 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-19 21:24:46.706356 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:24:46.706464 | orchestrator | 2025-05-19 21:24:46.706480 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-19 21:24:46.706493 | orchestrator | 2025-05-19 21:24:46.706504 | orchestrator | TASK [Gathering Facts] 
********************************************************* 2025-05-19 21:24:48.422694 | orchestrator | ok: [testbed-manager] 2025-05-19 21:24:48.422812 | orchestrator | 2025-05-19 21:24:48.422829 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-19 21:24:48.507805 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-19 21:24:48.507893 | orchestrator | 2025-05-19 21:24:48.507906 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-19 21:24:48.562287 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-19 21:24:48.562353 | orchestrator | 2025-05-19 21:24:48.562367 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-19 21:24:49.642660 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-19 21:24:49.642767 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-19 21:24:49.642783 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-19 21:24:49.642800 | orchestrator | 2025-05-19 21:24:49.642813 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-19 21:24:51.410522 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-19 21:24:51.410642 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-19 21:24:51.410659 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-19 21:24:51.410673 | orchestrator | 2025-05-19 21:24:51.410708 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-19 21:24:52.038873 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 21:24:52.038976 | orchestrator | changed: [testbed-manager] 2025-05-19 
21:24:52.038992 | orchestrator | 2025-05-19 21:24:52.039005 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-19 21:24:52.677579 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 21:24:52.677686 | orchestrator | changed: [testbed-manager] 2025-05-19 21:24:52.677727 | orchestrator | 2025-05-19 21:24:52.677740 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-19 21:24:52.731998 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:24:52.732074 | orchestrator | 2025-05-19 21:24:52.732088 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-19 21:24:53.079928 | orchestrator | ok: [testbed-manager] 2025-05-19 21:24:53.080024 | orchestrator | 2025-05-19 21:24:53.080064 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-19 21:24:53.131846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-19 21:24:53.131960 | orchestrator | 2025-05-19 21:24:53.131977 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-19 21:24:54.165120 | orchestrator | changed: [testbed-manager] 2025-05-19 21:24:54.165232 | orchestrator | 2025-05-19 21:24:54.165248 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-19 21:24:54.952673 | orchestrator | changed: [testbed-manager] 2025-05-19 21:24:54.952804 | orchestrator | 2025-05-19 21:24:54.952823 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-19 21:24:58.207572 | orchestrator | changed: [testbed-manager] 2025-05-19 21:24:58.207689 | orchestrator | 2025-05-19 21:24:58.207706 | orchestrator | TASK [Apply netbox role] 
******************************************************* 2025-05-19 21:24:58.331259 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-19 21:24:58.331377 | orchestrator | 2025-05-19 21:24:58.331393 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-19 21:24:58.451940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 21:24:58.452094 | orchestrator | 2025-05-19 21:24:58.452113 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-19 21:25:00.887299 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:00.887419 | orchestrator | 2025-05-19 21:25:00.887436 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-19 21:25:00.985083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-19 21:25:00.985174 | orchestrator | 2025-05-19 21:25:00.985188 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-19 21:25:02.089598 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-19 21:25:02.089700 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-19 21:25:02.089715 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-19 21:25:02.089726 | orchestrator | 2025-05-19 21:25:02.089738 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-19 21:25:02.157430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-19 21:25:02.157523 | orchestrator | 2025-05-19 21:25:02.157538 | orchestrator | TASK 
[osism.services.netbox : Copy postgres environment files] ***************** 2025-05-19 21:25:02.782188 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-19 21:25:02.782299 | orchestrator | 2025-05-19 21:25:02.782316 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-19 21:25:03.391584 | orchestrator | changed: [testbed-manager] 2025-05-19 21:25:03.391712 | orchestrator | 2025-05-19 21:25:03.391743 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-19 21:25:04.004731 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 21:25:04.004844 | orchestrator | changed: [testbed-manager] 2025-05-19 21:25:04.004861 | orchestrator | 2025-05-19 21:25:04.004874 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-19 21:25:04.462653 | orchestrator | changed: [testbed-manager] 2025-05-19 21:25:04.462758 | orchestrator | 2025-05-19 21:25:04.462775 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-19 21:25:04.794575 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:04.794685 | orchestrator | 2025-05-19 21:25:04.794734 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-19 21:25:04.838430 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:25:04.838523 | orchestrator | 2025-05-19 21:25:04.838537 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-19 21:25:05.447415 | orchestrator | changed: [testbed-manager] 2025-05-19 21:25:05.447517 | orchestrator | 2025-05-19 21:25:05.447533 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-19 21:25:05.514182 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-19 21:25:05.514278 | orchestrator | 2025-05-19 21:25:05.514293 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-19 21:25:06.258319 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-19 21:25:06.258459 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-19 21:25:06.258477 | orchestrator | 2025-05-19 21:25:06.258515 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-19 21:25:06.893523 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-19 21:25:06.893636 | orchestrator | 2025-05-19 21:25:06.893652 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-05-19 21:25:07.525892 | orchestrator | changed: [testbed-manager] 2025-05-19 21:25:07.525999 | orchestrator | 2025-05-19 21:25:07.526089 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-19 21:25:07.574839 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:25:07.574900 | orchestrator | 2025-05-19 21:25:07.574914 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-19 21:25:08.212905 | orchestrator | changed: [testbed-manager] 2025-05-19 21:25:08.213088 | orchestrator | 2025-05-19 21:25:08.213108 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-19 21:25:09.950564 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 21:25:09.950700 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 21:25:09.950717 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 21:25:09.950774 | orchestrator | changed: 
[testbed-manager] 2025-05-19 21:25:09.950790 | orchestrator | 2025-05-19 21:25:09.950803 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-19 21:25:15.692716 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-19 21:25:15.692860 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-19 21:25:15.692886 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-19 21:25:15.692903 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-19 21:25:15.692920 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-19 21:25:15.692937 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-19 21:25:15.692953 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-19 21:25:15.692970 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-19 21:25:15.692986 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-19 21:25:15.693002 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-19 21:25:15.693018 | orchestrator | 2025-05-19 21:25:15.693036 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-19 21:25:16.332940 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-19 21:25:16.333046 | orchestrator | 2025-05-19 21:25:16.333107 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-19 21:25:16.417815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-19 21:25:16.417915 | orchestrator | 2025-05-19 21:25:16.417930 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-19 
21:25:17.087205 | orchestrator | changed: [testbed-manager] 2025-05-19 21:25:17.087314 | orchestrator | 2025-05-19 21:25:17.087340 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-19 21:25:17.684839 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:17.684943 | orchestrator | 2025-05-19 21:25:17.684959 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-19 21:25:18.413977 | orchestrator | changed: [testbed-manager] 2025-05-19 21:25:18.414186 | orchestrator | 2025-05-19 21:25:18.414206 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-19 21:25:20.665473 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:20.665590 | orchestrator | 2025-05-19 21:25:20.665608 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-19 21:25:21.671192 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:21.671289 | orchestrator | 2025-05-19 21:25:21.671306 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-19 21:25:43.759508 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 
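The "Manage netbox service" task above fails once and succeeds on a retry ("10 retries left", then `ok`). Ansible's `retries`/`delay` loop can be sketched in shell as a generic probe — the function name and the `RETRIES`/`DELAY` defaults are illustrative:

```shell
# Generic retry loop sketching the retries/delay behaviour of the
# "Manage netbox service" task above: run the probe command until it
# succeeds or the attempts are exhausted. Defaults are illustrative.
retry_until() {
    local retries="${RETRIES:-10}" delay="${DELAY:-5}" i
    for ((i = 0; i < retries; i++)); do
        "$@" && return 0      # probe succeeded, stop retrying
        sleep "$delay"
    done
    return 1                  # exhausted all attempts
}
```

A caller such as `RETRIES=10 DELAY=5 retry_until systemctl is-active --quiet netbox` would approximate the task's behaviour (the unit name here is illustrative).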
2025-05-19 21:25:43.759607 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:43.759625 | orchestrator | 2025-05-19 21:25:43.759638 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-19 21:25:43.821642 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:25:43.821721 | orchestrator | 2025-05-19 21:25:43.821735 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-19 21:25:43.821748 | orchestrator | 2025-05-19 21:25:43.821759 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-19 21:25:43.864490 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:25:43.864525 | orchestrator | 2025-05-19 21:25:43.864536 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-19 21:25:43.917825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-19 21:25:43.917867 | orchestrator | 2025-05-19 21:25:43.917879 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-19 21:25:44.683135 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:44.683221 | orchestrator | 2025-05-19 21:25:44.683238 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-19 21:25:44.740895 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:44.740970 | orchestrator | 2025-05-19 21:25:44.740983 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-19 21:25:44.779571 | orchestrator | ok: [testbed-manager] => { 2025-05-19 21:25:44.779642 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-19 21:25:44.779659 | orchestrator | } 2025-05-19 21:25:44.779671 | orchestrator | 2025-05-19 
21:25:44.779683 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-19 21:25:45.304496 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:45.304565 | orchestrator | 2025-05-19 21:25:45.304576 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-19 21:25:45.976659 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:45.976754 | orchestrator | 2025-05-19 21:25:45.976771 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-19 21:25:46.026845 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:46.026906 | orchestrator | 2025-05-19 21:25:46.026915 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-19 21:25:46.067872 | orchestrator | ok: [testbed-manager] => { 2025-05-19 21:25:46.067949 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-19 21:25:46.067964 | orchestrator | } 2025-05-19 21:25:46.067976 | orchestrator | 2025-05-19 21:25:46.067988 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-19 21:25:46.121384 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:25:46.121442 | orchestrator | 2025-05-19 21:25:46.121463 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-19 21:25:46.159981 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:25:46.160026 | orchestrator | 2025-05-19 21:25:46.160038 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-19 21:25:46.204880 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:25:46.204933 | orchestrator | 2025-05-19 21:25:46.204945 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-19 21:25:46.314573 | orchestrator | skipping: 
[testbed-manager] 2025-05-19 21:25:46.314656 | orchestrator | 2025-05-19 21:25:46.314670 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-19 21:25:46.343021 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:25:46.343067 | orchestrator | 2025-05-19 21:25:46.343101 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-19 21:25:46.371264 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:25:46.371322 | orchestrator | 2025-05-19 21:25:46.371335 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-19 21:25:47.394273 | orchestrator | changed: [testbed-manager] 2025-05-19 21:25:47.394380 | orchestrator | 2025-05-19 21:25:47.394397 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-19 21:25:47.449809 | orchestrator | ok: [testbed-manager] 2025-05-19 21:25:47.449888 | orchestrator | 2025-05-19 21:25:47.449902 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-19 21:26:47.496385 | orchestrator | Pausing for 60 seconds 2025-05-19 21:26:47.496477 | orchestrator | changed: [testbed-manager] 2025-05-19 21:26:47.496493 | orchestrator | 2025-05-19 21:26:47.496506 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-19 21:26:47.542289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-19 21:26:47.542329 | orchestrator | 2025-05-19 21:26:47.542341 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-19 21:30:06.254228 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 
2025-05-19 21:30:06.254317 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left).
2025-05-19 21:30:06.254332 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left).
2025-05-19 21:30:06.254345 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left).
2025-05-19 21:30:06.254356 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left).
2025-05-19 21:30:06.254368 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left).
2025-05-19 21:30:06.254379 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left).
2025-05-19 21:30:06.254392 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left).
2025-05-19 21:30:06.254404 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left).
2025-05-19 21:30:06.254416 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left).
2025-05-19 21:30:06.254423 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left).
2025-05-19 21:30:06.254430 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left).
2025-05-19 21:30:06.254436 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left).
2025-05-19 21:30:06.254444 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left).
2025-05-19 21:30:06.254455 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left).
2025-05-19 21:30:06.254466 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left).
2025-05-19 21:30:06.254476 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left).
2025-05-19 21:30:06.254504 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left).
2025-05-19 21:30:06.254516 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left).
2025-05-19 21:30:06.254527 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:06.254563 | orchestrator |
2025-05-19 21:30:06.254576 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-05-19 21:30:06.254587 | orchestrator |
2025-05-19 21:30:06.254598 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-19 21:30:08.302495 | orchestrator | ok: [testbed-manager]
2025-05-19 21:30:08.302590 | orchestrator |
2025-05-19 21:30:08.302608 | orchestrator | TASK [Apply manager role] ******************************************************
2025-05-19 21:30:08.422652 | orchestrator | included: osism.services.manager for testbed-manager
2025-05-19 21:30:08.422793 | orchestrator |
2025-05-19 21:30:08.422814 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-05-19 21:30:08.481213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-05-19 21:30:08.481299 | orchestrator |
2025-05-19 21:30:08.481313 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-05-19 21:30:10.313146 | orchestrator | ok: [testbed-manager]
2025-05-19 21:30:10.313301 | orchestrator |
2025-05-19 21:30:10.313332 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-05-19 21:30:10.376581 | orchestrator | ok: [testbed-manager]
2025-05-19 21:30:10.376696 | orchestrator |
2025-05-19 21:30:10.376712 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-05-19 21:30:10.479326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-05-19 21:30:10.479443 | orchestrator |
2025-05-19 21:30:10.479455 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-05-19 21:30:13.376554 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-05-19 21:30:13.376640 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-05-19 21:30:13.376653 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-05-19 21:30:13.376665 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-05-19 21:30:13.376676 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-05-19 21:30:13.376687 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-05-19 21:30:13.376698 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-05-19 21:30:13.376728 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-05-19 21:30:13.376740 | orchestrator |
2025-05-19 21:30:13.376756 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-05-19 21:30:13.997463 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:13.997567 | orchestrator |
2025-05-19 21:30:13.997584 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
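The "Create required directories" task above is effectively an idempotent `mkdir` loop over the listed `/opt` paths — hence `ok` for the pre-existing `/opt/manager` and `changed` for the rest. A sketch of the same loop, rooted in a scratch directory so it can run anywhere without touching the real system:

```shell
# Paths are the ones shown in the task output; $root stands in for /.
root=$(mktemp -d)
for d in /opt/ansible /opt/archive /opt/manager/configuration \
         /opt/manager/data /opt/manager /opt/manager/secrets \
         /opt/ansible/secrets /opt/state; do
    mkdir -p "$root$d"   # no-op (like Ansible's "ok") when it already exists
done
ls "$root/opt/manager"
```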
2025-05-19 21:30:14.080158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-05-19 21:30:14.080278 | orchestrator |
2025-05-19 21:30:14.080297 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-05-19 21:30:15.328839 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-05-19 21:30:15.328967 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-05-19 21:30:15.328994 | orchestrator |
2025-05-19 21:30:15.329014 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-05-19 21:30:15.969445 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:15.969520 | orchestrator |
2025-05-19 21:30:15.969527 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-05-19 21:30:16.020798 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:30:16.020879 | orchestrator |
2025-05-19 21:30:16.020890 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-05-19 21:30:16.082046 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-05-19 21:30:16.082120 | orchestrator |
2025-05-19 21:30:16.082128 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-05-19 21:30:17.462866 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-19 21:30:17.462999 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-19 21:30:17.463054 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:17.463069 | orchestrator |
2025-05-19 21:30:17.463082 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-05-19 21:30:18.096387 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:18.096486 | orchestrator |
2025-05-19 21:30:18.096504 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-05-19 21:30:18.168416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager
2025-05-19 21:30:18.168506 | orchestrator |
2025-05-19 21:30:18.168530 | orchestrator | TASK [osism.services.manager : Copy secret files] ******************************
2025-05-19 21:30:19.380082 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-19 21:30:19.380196 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-19 21:30:19.380213 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:19.380227 | orchestrator |
2025-05-19 21:30:19.380240 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] *******************
2025-05-19 21:30:20.032106 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:20.032176 | orchestrator |
2025-05-19 21:30:20.032187 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-05-19 21:30:20.140885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-05-19 21:30:20.141013 | orchestrator |
2025-05-19 21:30:20.141040 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-05-19 21:30:20.816815 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:20.816919 | orchestrator |
2025-05-19 21:30:20.816936 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-05-19 21:30:21.253585 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:21.253733 | orchestrator |
2025-05-19 21:30:21.253760 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-05-19 21:30:22.515993 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-05-19 21:30:22.516134 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-05-19 21:30:22.516162 | orchestrator |
2025-05-19 21:30:22.516184 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-05-19 21:30:23.204191 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:23.204284 | orchestrator |
2025-05-19 21:30:23.204299 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-05-19 21:30:23.592263 | orchestrator | ok: [testbed-manager]
2025-05-19 21:30:23.592370 | orchestrator |
2025-05-19 21:30:23.592387 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-05-19 21:30:23.964299 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:23.964405 | orchestrator |
2025-05-19 21:30:23.964422 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-05-19 21:30:24.021528 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:30:24.021745 | orchestrator |
2025-05-19 21:30:24.021772 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-05-19 21:30:24.096952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-05-19 21:30:24.097067 | orchestrator |
2025-05-19 21:30:24.097082 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-05-19 21:30:24.147854 | orchestrator | ok: [testbed-manager]
2025-05-19 21:30:24.147950 | orchestrator |
2025-05-19 21:30:24.147965 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-05-19 21:30:26.259289 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-05-19 21:30:26.259409 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-05-19 21:30:26.259424 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-05-19 21:30:26.259436 | orchestrator |
2025-05-19 21:30:26.259449 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-05-19 21:30:26.950751 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:26.950856 | orchestrator |
2025-05-19 21:30:26.950876 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-05-19 21:30:27.680830 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:27.680955 | orchestrator |
2025-05-19 21:30:27.680973 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-05-19 21:30:28.485870 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:28.485973 | orchestrator |
2025-05-19 21:30:28.485990 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-05-19 21:30:28.565167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-05-19 21:30:28.565264 | orchestrator |
2025-05-19 21:30:28.565279 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-05-19 21:30:28.614160 | orchestrator | ok: [testbed-manager]
2025-05-19 21:30:28.614285 | orchestrator |
2025-05-19 21:30:28.614302 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-05-19 21:30:29.330129 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-05-19 21:30:29.330224 | orchestrator |
2025-05-19 21:30:29.330240 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-05-19 21:30:29.407831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-05-19 21:30:29.407923 | orchestrator |
2025-05-19 21:30:29.407942 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-05-19 21:30:30.091449 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:30.091550 | orchestrator |
2025-05-19 21:30:30.091566 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-05-19 21:30:30.730566 | orchestrator | ok: [testbed-manager]
2025-05-19 21:30:30.730747 | orchestrator |
2025-05-19 21:30:30.730781 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-05-19 21:30:30.777948 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:30:30.778128 | orchestrator |
2025-05-19 21:30:30.778158 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-05-19 21:30:30.837708 | orchestrator | ok: [testbed-manager]
2025-05-19 21:30:30.837788 | orchestrator |
2025-05-19 21:30:30.837799 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-05-19 21:30:31.664305 | orchestrator | changed: [testbed-manager]
2025-05-19 21:30:31.664449 | orchestrator |
2025-05-19 21:30:31.664473 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-05-19 21:31:17.787390 | orchestrator | changed: [testbed-manager]
2025-05-19 21:31:17.787604 | orchestrator |
2025-05-19 21:31:17.787632 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-05-19 21:31:18.479855 | orchestrator | ok: [testbed-manager]
2025-05-19 21:31:18.479965 | orchestrator |
2025-05-19 21:31:18.479982 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-05-19 21:31:21.478074 | orchestrator | changed: [testbed-manager]
2025-05-19 21:31:21.478184 | orchestrator |
2025-05-19 21:31:21.478203 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-05-19 21:31:21.535303 | orchestrator | ok: [testbed-manager]
2025-05-19 21:31:21.535431 | orchestrator |
2025-05-19 21:31:21.535447 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-19 21:31:21.535459 | orchestrator |
2025-05-19 21:31:21.535471 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-05-19 21:31:21.603348 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:31:21.603511 | orchestrator |
2025-05-19 21:31:21.603529 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-05-19 21:32:21.660980 | orchestrator | Pausing for 60 seconds
2025-05-19 21:32:21.661105 | orchestrator | changed: [testbed-manager]
2025-05-19 21:32:21.661122 | orchestrator |
2025-05-19 21:32:21.661174 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-05-19 21:32:26.049428 | orchestrator | changed: [testbed-manager]
2025-05-19 21:32:26.049542 | orchestrator |
2025-05-19 21:32:26.049560 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-05-19 21:33:07.588422 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-05-19 21:33:07.588535 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-05-19 21:33:07.588580 | orchestrator | changed: [testbed-manager]
2025-05-19 21:33:07.588595 | orchestrator |
2025-05-19 21:33:07.588607 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-05-19 21:33:15.902459 | orchestrator | changed: [testbed-manager]
2025-05-19 21:33:15.902568 | orchestrator |
2025-05-19 21:33:15.902585 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-05-19 21:33:15.991523 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-05-19 21:33:15.991617 | orchestrator |
2025-05-19 21:33:15.991633 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-19 21:33:15.991646 | orchestrator |
2025-05-19 21:33:15.991657 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-05-19 21:33:16.047621 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:33:16.047706 | orchestrator |
2025-05-19 21:33:16.047721 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:33:16.047735 | orchestrator | testbed-manager : ok=109 changed=57 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0
2025-05-19 21:33:16.047746 | orchestrator |
2025-05-19 21:33:16.155159 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-19 21:33:16.155253 | orchestrator | + deactivate
2025-05-19 21:33:16.155269 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-19 21:33:16.155283 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-19 21:33:16.155294 | orchestrator | + export PATH
2025-05-19 21:33:16.155306 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-19 21:33:16.155319 | orchestrator | + '[' -n '' ']'
2025-05-19 21:33:16.155330 | orchestrator | + hash -r
2025-05-19 21:33:16.155342 | orchestrator | + '[' -n '' ']'
2025-05-19 21:33:16.155353 | orchestrator | + unset VIRTUAL_ENV
2025-05-19 21:33:16.155365 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-19 21:33:16.155377 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-19 21:33:16.155388 | orchestrator | + unset -f deactivate
2025-05-19 21:33:16.155400 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-05-19 21:33:16.162682 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-19 21:33:16.162742 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-19 21:33:16.162762 | orchestrator | + local max_attempts=60
2025-05-19 21:33:16.162782 | orchestrator | + local name=ceph-ansible
2025-05-19 21:33:16.162802 | orchestrator | + local attempt_num=1
2025-05-19 21:33:16.163681 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-19 21:33:16.198223 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-19 21:33:16.198294 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-19 21:33:16.198314 | orchestrator | + local max_attempts=60
2025-05-19 21:33:16.198333 | orchestrator | + local name=kolla-ansible
2025-05-19 21:33:16.198352 | orchestrator | + local attempt_num=1
2025-05-19 21:33:16.198673 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-19 21:33:16.228524 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-19 21:33:16.228588 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-19 21:33:16.228602 | orchestrator | + local max_attempts=60
2025-05-19 21:33:16.228615 | orchestrator | + local name=osism-ansible
2025-05-19 21:33:16.228626 | orchestrator | + local attempt_num=1
2025-05-19 21:33:16.229198 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-19 21:33:16.259057 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-19 21:33:16.259130 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-19 21:33:16.259145 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-19 21:33:16.924945 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-05-19 21:33:17.111664 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-19 21:33:17.111763 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111803 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111815 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Restarting (0) 3 seconds ago
2025-05-19 21:33:17.111827 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-05-19 21:33:17.111838 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111849 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" conductor About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111859 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111870 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy)
2025-05-19 21:33:17.111881 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111892 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-05-19 21:33:17.111902 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" netbox About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111946 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111957 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-05-19 21:33:17.111967 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111978 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111989 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.111999 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-05-19 21:33:17.116377 | orchestrator | + docker compose --project-directory /opt/netbox ps
2025-05-19 21:33:17.261883 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-19 21:33:17.262004 | orchestrator | netbox-netbox-1 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" netbox 7 minutes ago Up 7 minutes (healthy)
2025-05-19 21:33:17.262093 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" netbox-worker 7 minutes ago Up 3 minutes (healthy)
2025-05-19 21:33:17.262108 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" postgres 7 minutes ago Up 7 minutes (healthy) 5432/tcp
2025-05-19 21:33:17.262135 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" redis 7 minutes ago Up 7 minutes (healthy) 6379/tcp
2025-05-19 21:33:17.269550 | orchestrator | ++ semver latest 7.0.0
2025-05-19 21:33:17.314964 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-19 21:33:17.315024 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-19 21:33:17.315040 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-05-19 21:33:17.317782 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-05-19 21:33:19.204112 | orchestrator | 2025-05-19 21:33:19 | INFO  | Task 446bfe6a-38e4-47aa-b12e-a7894efc775c (resolvconf) was prepared for execution.
2025-05-19 21:33:19.204217 | orchestrator | 2025-05-19 21:33:19 | INFO  | It takes a moment until task 446bfe6a-38e4-47aa-b12e-a7894efc775c (resolvconf) has been started and output is visible here.
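The `wait_for_container_healthy` calls traced earlier poll `docker inspect` for the container's health status. A reconstruction of such a helper from the xtrace — only `max_attempts`, `name`, `attempt_num`, and the `docker inspect` call are visible in the log; the loop shape, polling interval, and failure path are assumptions, since the trace only shows the happy path where the first check already returns `healthy`:

```shell
# Reconstructed from the xtrace above; retry/sleep details are assumed.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    while true; do
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)
        if [ "$status" = healthy ]; then
            return 0
        fi
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # assumed interval between polls
    done
}
```

In this run all three containers (ceph-ansible, kolla-ansible, osism-ansible) report `healthy` on the first attempt, so each call returns immediately.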
2025-05-19 21:33:22.990310 | orchestrator |
2025-05-19 21:33:22.990438 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-05-19 21:33:22.990764 | orchestrator |
2025-05-19 21:33:22.992168 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-19 21:33:22.992566 | orchestrator | Monday 19 May 2025 21:33:22 +0000 (0:00:00.140) 0:00:00.140 ************
2025-05-19 21:33:26.825160 | orchestrator | ok: [testbed-manager]
2025-05-19 21:33:26.825268 | orchestrator |
2025-05-19 21:33:26.825893 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-19 21:33:26.826609 | orchestrator | Monday 19 May 2025 21:33:26 +0000 (0:00:03.838) 0:00:03.978 ************
2025-05-19 21:33:26.879798 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:33:26.881024 | orchestrator |
2025-05-19 21:33:26.881706 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-19 21:33:26.882318 | orchestrator | Monday 19 May 2025 21:33:26 +0000 (0:00:00.056) 0:00:04.035 ************
2025-05-19 21:33:26.967442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-05-19 21:33:26.967571 | orchestrator |
2025-05-19 21:33:26.967645 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-19 21:33:26.968451 | orchestrator | Monday 19 May 2025 21:33:26 +0000 (0:00:00.086) 0:00:04.122 ************
2025-05-19 21:33:27.054174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-05-19 21:33:27.054633 | orchestrator |
2025-05-19 21:33:27.055249 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-19 21:33:27.056249 | orchestrator | Monday 19 May 2025 21:33:27 +0000 (0:00:00.087) 0:00:04.209 ************
2025-05-19 21:33:28.077554 | orchestrator | ok: [testbed-manager]
2025-05-19 21:33:28.077656 | orchestrator |
2025-05-19 21:33:28.078322 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-19 21:33:28.079029 | orchestrator | Monday 19 May 2025 21:33:28 +0000 (0:00:01.021) 0:00:05.231 ************
2025-05-19 21:33:28.139516 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:33:28.140187 | orchestrator |
2025-05-19 21:33:28.141001 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-19 21:33:28.141712 | orchestrator | Monday 19 May 2025 21:33:28 +0000 (0:00:00.063) 0:00:05.294 ************
2025-05-19 21:33:28.623045 | orchestrator | ok: [testbed-manager]
2025-05-19 21:33:28.623262 | orchestrator |
2025-05-19 21:33:28.624051 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-19 21:33:28.624806 | orchestrator | Monday 19 May 2025 21:33:28 +0000 (0:00:00.483) 0:00:05.777 ************
2025-05-19 21:33:28.694684 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:33:28.694904 | orchestrator |
2025-05-19 21:33:28.695746 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-19 21:33:28.695993 | orchestrator | Monday 19 May 2025 21:33:28 +0000 (0:00:00.072) 0:00:05.850 ************
2025-05-19 21:33:29.248115 | orchestrator | changed: [testbed-manager]
2025-05-19 21:33:29.248215 | orchestrator |
2025-05-19 21:33:29.248230 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-19 21:33:29.248849 | orchestrator | Monday 19 May 2025 21:33:29 +0000 (0:00:00.550) 0:00:06.401 ************
2025-05-19 21:33:30.358239 | orchestrator | changed: [testbed-manager]
2025-05-19 21:33:30.358325 | orchestrator |
2025-05-19 21:33:30.359474 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-19 21:33:30.360411 | orchestrator | Monday 19 May 2025 21:33:30 +0000 (0:00:01.108) 0:00:07.509 ************
2025-05-19 21:33:31.281362 | orchestrator | ok: [testbed-manager]
2025-05-19 21:33:31.282303 | orchestrator |
2025-05-19 21:33:31.283154 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-19 21:33:31.283804 | orchestrator | Monday 19 May 2025 21:33:31 +0000 (0:00:00.923) 0:00:08.433 ************
2025-05-19 21:33:31.371295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-05-19 21:33:31.371438 | orchestrator |
2025-05-19 21:33:31.372074 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-19 21:33:31.372734 | orchestrator | Monday 19 May 2025 21:33:31 +0000 (0:00:00.090) 0:00:08.523 ************
2025-05-19 21:33:32.491472 | orchestrator | changed: [testbed-manager]
2025-05-19 21:33:32.491655 | orchestrator |
2025-05-19 21:33:32.491691 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:33:32.491707 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 21:33:32.491721 | orchestrator | 2025-05-19 21:33:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 21:33:32.491733 | orchestrator | 2025-05-19 21:33:32 | INFO  | Please wait and do not abort execution.
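The link task above replaces `/etc/resolv.conf` with a symlink to systemd-resolved's stub resolver file. A sketch of that operation, done under a scratch root so it does not touch the real system (the `127.0.0.53` stub address is the systemd-resolved default, not taken from this log):

```shell
# Same link as the role creates, rooted at $root instead of /.
root=$(mktemp -d)
mkdir -p "$root/run/systemd/resolve" "$root/etc"
echo "nameserver 127.0.0.53" > "$root/run/systemd/resolve/stub-resolv.conf"
ln -snf "$root/run/systemd/resolve/stub-resolv.conf" "$root/etc/resolv.conf"
readlink "$root/etc/resolv.conf"
```

The `changed:` result reflects exactly this: `/etc/resolv.conf` became a symlink where it previously was a regular file.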
2025-05-19 21:33:32.491822 | orchestrator | 2025-05-19 21:33:32.494902 | orchestrator | 2025-05-19 21:33:32.496506 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:33:32.496528 | orchestrator | Monday 19 May 2025 21:33:32 +0000 (0:00:01.118) 0:00:09.642 ************ 2025-05-19 21:33:32.497672 | orchestrator | =============================================================================== 2025-05-19 21:33:32.499026 | orchestrator | Gathering Facts --------------------------------------------------------- 3.84s 2025-05-19 21:33:32.499717 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.12s 2025-05-19 21:33:32.501026 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2025-05-19 21:33:32.502106 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.02s 2025-05-19 21:33:32.505645 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.92s 2025-05-19 21:33:32.505676 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2025-05-19 21:33:32.506629 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2025-05-19 21:33:32.507212 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-05-19 21:33:32.507809 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-05-19 21:33:32.508440 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-05-19 21:33:32.509199 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-05-19 21:33:32.510342 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-05-19 
21:33:32.510368 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-05-19 21:33:32.907460 | orchestrator | + osism apply sshconfig 2025-05-19 21:33:34.584433 | orchestrator | 2025-05-19 21:33:34 | INFO  | Task 2eae8027-c3db-4dcb-8608-0796506e44c1 (sshconfig) was prepared for execution. 2025-05-19 21:33:34.584551 | orchestrator | 2025-05-19 21:33:34 | INFO  | It takes a moment until task 2eae8027-c3db-4dcb-8608-0796506e44c1 (sshconfig) has been started and output is visible here. 2025-05-19 21:33:38.233807 | orchestrator | 2025-05-19 21:33:38.234413 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-19 21:33:38.235013 | orchestrator | 2025-05-19 21:33:38.235739 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-19 21:33:38.236324 | orchestrator | Monday 19 May 2025 21:33:38 +0000 (0:00:00.123) 0:00:00.123 ************ 2025-05-19 21:33:38.731956 | orchestrator | ok: [testbed-manager] 2025-05-19 21:33:38.732989 | orchestrator | 2025-05-19 21:33:38.733043 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-19 21:33:38.733704 | orchestrator | Monday 19 May 2025 21:33:38 +0000 (0:00:00.500) 0:00:00.623 ************ 2025-05-19 21:33:39.153121 | orchestrator | changed: [testbed-manager] 2025-05-19 21:33:39.154135 | orchestrator | 2025-05-19 21:33:39.155149 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-19 21:33:39.155924 | orchestrator | Monday 19 May 2025 21:33:39 +0000 (0:00:00.421) 0:00:01.044 ************ 2025-05-19 21:33:44.205750 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-19 21:33:44.206137 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-19 21:33:44.206920 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 
2025-05-19 21:33:44.207954 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-19 21:33:44.208474 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-19 21:33:44.208962 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-19 21:33:44.209849 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-05-19 21:33:44.210348 | orchestrator | 2025-05-19 21:33:44.211882 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-19 21:33:44.212233 | orchestrator | Monday 19 May 2025 21:33:44 +0000 (0:00:05.050) 0:00:06.095 ************ 2025-05-19 21:33:44.267612 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:33:44.267707 | orchestrator | 2025-05-19 21:33:44.267920 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-19 21:33:44.268667 | orchestrator | Monday 19 May 2025 21:33:44 +0000 (0:00:00.063) 0:00:06.158 ************ 2025-05-19 21:33:44.825545 | orchestrator | changed: [testbed-manager] 2025-05-19 21:33:44.828063 | orchestrator | 2025-05-19 21:33:44.828096 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:33:44.828327 | orchestrator | 2025-05-19 21:33:44 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 21:33:44.828342 | orchestrator | 2025-05-19 21:33:44 | INFO  | Please wait and do not abort execution. 
2025-05-19 21:33:44.828963 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:33:44.829142 | orchestrator | 2025-05-19 21:33:44.829663 | orchestrator | 2025-05-19 21:33:44.831048 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:33:44.831205 | orchestrator | Monday 19 May 2025 21:33:44 +0000 (0:00:00.559) 0:00:06.717 ************ 2025-05-19 21:33:44.831616 | orchestrator | =============================================================================== 2025-05-19 21:33:44.832939 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.05s 2025-05-19 21:33:44.834090 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2025-05-19 21:33:44.835763 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.50s 2025-05-19 21:33:44.835776 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.42s 2025-05-19 21:33:44.835782 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-05-19 21:33:45.235815 | orchestrator | + osism apply known-hosts 2025-05-19 21:33:46.886764 | orchestrator | 2025-05-19 21:33:46 | INFO  | Task a2069c92-c381-4bab-8259-270d59e35bb9 (known-hosts) was prepared for execution. 2025-05-19 21:33:46.886889 | orchestrator | 2025-05-19 21:33:46 | INFO  | It takes a moment until task a2069c92-c381-4bab-8259-270d59e35bb9 (known-hosts) has been started and output is visible here. 
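The sshconfig play above writes one config fragment per host under `.ssh/config.d` and then assembles them into a single ssh config. A minimal shell sketch of that fragment-plus-assemble pattern; the paths, the `/tmp` demo location, and the option values are assumptions for illustration, not taken from the actual role:

```shell
# Sketch of the osism.commons.sshconfig flow (demo paths, not the real role):
# 1) "Ensure .ssh/config.d exist"
mkdir -p /tmp/sshconfig-demo/config.d

# 2) "Ensure config for each host exist" -- one fragment per inventory host
cat > /tmp/sshconfig-demo/config.d/testbed-node-0 <<'EOF'
Host testbed-node-0
    StrictHostKeyChecking yes
EOF

cat > /tmp/sshconfig-demo/config.d/testbed-node-1 <<'EOF'
Host testbed-node-1
    StrictHostKeyChecking yes
EOF

# 3) "Assemble ssh config" -- concatenate the fragments into one file,
#    analogous to what the ansible.builtin.assemble module does
cat /tmp/sshconfig-demo/config.d/* > /tmp/sshconfig-demo/config

# The assembled file now contains one Host stanza per fragment
grep -c '^Host ' /tmp/sshconfig-demo/config
```

The fragment-directory approach mirrors the task names in the log: per-host files can be regenerated idempotently, and the assemble step is the only task that touches the final config.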
2025-05-19 21:33:50.713688 | orchestrator | 2025-05-19 21:33:50.714301 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-19 21:33:50.715186 | orchestrator | 2025-05-19 21:33:50.715361 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-19 21:33:50.717049 | orchestrator | Monday 19 May 2025 21:33:50 +0000 (0:00:00.158) 0:00:00.158 ************ 2025-05-19 21:33:56.348965 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-19 21:33:56.349332 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-19 21:33:56.349930 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-19 21:33:56.350711 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-19 21:33:56.352376 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-19 21:33:56.354340 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-19 21:33:56.355252 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-19 21:33:56.356416 | orchestrator | 2025-05-19 21:33:56.357109 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-19 21:33:56.357809 | orchestrator | Monday 19 May 2025 21:33:56 +0000 (0:00:05.635) 0:00:05.793 ************ 2025-05-19 21:33:56.516891 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-19 21:33:56.517905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-19 21:33:56.519288 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-19 21:33:56.520166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-19 21:33:56.521628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-19 21:33:56.521666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-19 21:33:56.522341 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-19 21:33:56.522961 | orchestrator | 2025-05-19 21:33:56.523383 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:33:56.524001 | orchestrator | Monday 19 May 2025 21:33:56 +0000 (0:00:00.170) 0:00:05.964 ************ 2025-05-19 21:33:57.662353 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF3F67PsjqRFtyrzHA2027ejP1/ReUltHQZ4HEXJPtmr) 2025-05-19 21:33:57.663488 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDEkbkFt6Ln0GzZ3q/PJ59IEBCLPEnjg/7RebrQQzS+Wl/+zqCfC+YRWYSFVJ7dbsTQC7qH2ln2gR2MwpJyNvCzDQsHZ+QMxeXlMx6MSrnGAyodXnamw59EoFT2moW06xiKM9InBkDY4fbkbNgm6iklF/ko9fG0lKNydyJm/naiDeCMDB68/RCWGOR15AsRJY/nUPGYWNgXujq2lmPZTwbe4fRIomSNcc937qxafvbDGc9Opq5I035IxQ3rF4iQfHjBIKOiM8V+9dPltBXpmZ3O4MXtHgKVrYhCVsBGaxOWDzglCBrLcdMtFz5j0640EtvX0IVSsg8zP8LTMxac1Pnrfpc6t0E5OtQJEUPNsPYDvWZnmMfEMKz7HDHVp8Exi0H5rdDjN615K105Kk4I3pRFmbezB/ZUf7i8ifMKOh4nsyPJAlXPYMNH/wtAVBwGQGhI9myWI52EZ0G2jEJXIyqIWGVm4utNxTzOlRDLueDkYDJEH/H/wjOwoAPaUvi0oWk=) 2025-05-19 21:33:57.663841 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0ltcgCWvKGie5OEJkBqCpINWqMIJMh+l6FbOkCbyCwbRCgLO34nGn7WFjxjyIVvrxEBIcEiGZRq2lZs9Ccb2Q=) 2025-05-19 21:33:57.664826 | orchestrator | 2025-05-19 21:33:57.665174 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:33:57.665945 | orchestrator | Monday 19 May 2025 21:33:57 +0000 (0:00:01.142) 0:00:07.107 ************ 2025-05-19 21:33:58.704063 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCIaFSs/422oym+DNQTMCrVYr7q5xCarVfPrWosv9NgLs/PQ9XbI0uScNVL8w1h7CVnscageprAWVg8/txPQkGSWEtElmQOblWWZT3H7TKnKV4ExtoQaGXOLT6lNhMTlfYddDpZu+zpagZx3EwY8XaQnTwF5OLdUNqKyaG/GzF7LcavudjCUNXGW17oQALUSRmWbZqOWNkHiz6HKbIiAqKXhHC1xDGqkyKcRBwnszsHZrZHqBIYrNP8L87/o9skuOj7TJK1Qw1hybFyrz20o17+HxRSw7XX6mZVg8ssc4ZYUCbvoRzFhBvvuyPc50P+DNli8GgrLzBugKvmwz66+gX6a49RyMNbWSPlz3TPcrZ2oyI56Ut3DrcPpgtyZGoY4MJllFxyqu3f0ncYkOQ9hdBKHjd8rowFGoO0QIBqdyZUcMrHzkjisukilqOEGs/B3FU3kIEEopknSv2IYAohePLeLRSNnHY0tzvsor974vCZboanoGkpjwJc+t0/37b7b8s=) 2025-05-19 21:33:58.704302 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKSCdwctH0jcNVs2N8i0dAkYYvyc5wnkoiJCAVVAB2VxZcJJ8KugV+RuSk6oPU8hQF+k06KMNXW1FkNwKUEDU8Y=) 
2025-05-19 21:33:58.705143 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIHudDtz3nNYBcjQER+YMH10Nv14PCgureW5l3Mlzxvs) 2025-05-19 21:33:58.705526 | orchestrator | 2025-05-19 21:33:58.706204 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:33:58.707583 | orchestrator | Monday 19 May 2025 21:33:58 +0000 (0:00:01.042) 0:00:08.150 ************ 2025-05-19 21:33:59.711322 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOTEvFgxaGX0sJjp+BQu5AMGHf2jPQW0QWLKbr7SaGv9nCKl/HYqor68WuYajkSawl4hFMm/Cym3UiAoAgvbQYk=) 2025-05-19 21:33:59.711815 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO99S6k+9ihh4+dLyIg4lCklL3jOfSESzN01Q+mLSigS) 2025-05-19 21:33:59.713203 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO5qhD2I3n9+NKVXo13Te9nXkvLWozWepBaEjQs1EAjsApDKVrHEVuAUC1FQOFLO0TkV5QgpdgPFzXlxEPKYg2YArq0GE67192wAstYbKUxIqQeT4Ya78ZSB6Hp6ry+IJCRHhdpogVrz3yn6uOlTnXcEqUPb0euaY2gVH4c+3S1P6TrkXGFiZxNaRSD/TbuLx8NyaG9HcCQQmeFV+OawtLOri21P9T86ooSGP75ros0qfAp5AKx58iXITUtInet8Uosq8uDQSnB8T08MAxDWtehw8DPVq6M32JFk0jK8m/toNGnJPQucDZ02jwRKh9YBw39/Cvk8J3eqHKL2+o5N9E8lSlNxCNU/ft03OUkQkn+ybXApKdPzVC3Trd2iThhRohlBbx2iT97Z2ZtCvMVwe5kuKXDFBewYbbZn7vSZsqNiWa3bWDLs7zeOjuOrJ4K7zRyr3gAXnn1u9m3V1/LEIQ6G4fGg9lCsBPobuxCndwfGcCJ8paep3q49l81ZFTd4k=) 2025-05-19 21:33:59.713365 | orchestrator | 2025-05-19 21:33:59.714256 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:33:59.714703 | orchestrator | Monday 19 May 2025 21:33:59 +0000 (0:00:01.006) 0:00:09.156 ************ 2025-05-19 21:34:00.741473 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDScS8VDtnNFs4lv96LybZfqhfLCClQcanKKECmp4YfQCSp0/zS8+vqPmuROS25x1KgEyobEuSQLGpllTax//eQfoluurhlyEevGa8A9BCxMpmnUIU7VKj8fB0qTATDQJvSOUQWT5TeGFZPczEeIFlZMmwIlllK44AjLgiEljUWQ+YyO0lk4vZjoE3xiiJJwdojhU9pMtuMuNou3MQzmVkE7zozU6qJ3T/xgjU5KhudnFBPcc+LDsSAfyGN4OZF9JqZ3Yh1iVlgtj7M90mYbvEI/HjUzm9gu4qnhWhxe6xHXk2NGYE9UHxMfT78oSOspkdJGrycETRcvmfqoY4Uk0/Njiv/TKguVjm76Co6vJ9pYExUwjBfIeQxshWvWXhjgaNiF5B4AwCpNBwCZUXuR+CKUSg9lr+V8YbEnGI0rjm5gvan82LV2WVlfPWystGLpdV+QwzfsobxWEAOYw1VKF/iksze0vB/0Uy+og5JOTx5xS9LGvO1WOdDDwFaxrqiT58=) 2025-05-19 21:34:00.741617 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA0hUIBgLF6x2VwFStGS+NATNpVz9xO5raDIXJ/mymoKIkMO+NFwtgEa060/qpxzdbdJUiN9Ro4FyMAsD8h/9Ng=) 2025-05-19 21:34:00.742992 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPrbHXev2E2LSuPsPeRvL6LWxUrwY+MvChC64+bFUUng) 2025-05-19 21:34:00.743646 | orchestrator | 2025-05-19 21:34:00.744355 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:34:00.744905 | orchestrator | Monday 19 May 2025 21:34:00 +0000 (0:00:01.030) 0:00:10.186 ************ 2025-05-19 21:34:01.782294 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBE2uc4D9WzYcTJ4Ich7nFvrrPT1R+pp/nNgBHbb9K14) 2025-05-19 21:34:01.782807 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCB5EoJw9IHoQRPNvvfKYz9bLdM/HYDHaAhr3evtliDiQMktYtv9z4St9AtXoVI2zN+3HhWcoCL6mp0k8lKEPMS5fxS1HkFYpvs/GpkFFY0IiNtEG3wnSUV2X1huBdfV7dk27pCZprzZ/bMMq47DKne0R1CtKlX4aUbMCDD/Vwq0NEvv421TlUq16rPMlfSKAJ6NldREvIDSLhvVyLdjGCu981aNzIt3n0lOGqUP8/ZiI7ZYzsQivF6+xS/KMnRnXJy9UxISwfZtezxbyVoIqkkS4OfGicScsdQE/4yp0WwiVx2dLVoMAne92SNnOIuPXly/VuHvnnyE6utut4V/03lziCk2Z+TMhlGaLLJhC034TtduxAPqaN4vmTjFJ0ozmxe7Xjdy5WtudptCbss5z1q6sMyivSNwqm/Pwz2Xo5NQCGYMocEl/kPr1IMQkCd/dJwOxwpc4bV0aHiZ1Vf0d0p224oMIVXt8nPrcmFW8sp+wZXYpRveHJUk64y3sbRk0M=) 2025-05-19 21:34:01.783559 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBbiw+Y+5PtuxMKRLcVZCf6hh65Zdz7KIiI8uU4aZtYDuhPpQ/JlJaHnHqNOdwlfC0LQrzn89CLVE4nPqybles4=) 2025-05-19 21:34:01.784255 | orchestrator | 2025-05-19 21:34:01.784713 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:34:01.785037 | orchestrator | Monday 19 May 2025 21:34:01 +0000 (0:00:01.039) 0:00:11.226 ************ 2025-05-19 21:34:02.828519 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7GlzQ5C6gLxiDYCy8rqDcJDqNZhoVClPYqgPWzlElNgk8feXDGVXgPv2A6/twiMMZbkILibUDmTIi8dMz6lXnvihEmX+si2vtjClxUaDm5dYiyqWFpruavn3yGdtNaMU/XjW0bOv0IYH+sEA3jvEHW3DTbXxGpefUwdqsVbsIIrV+Ev7Urewdl4NvIO5Q6N9tUHMi+VHJSW2wdeK6Hm4nSbFgskxhE8F032Juf2Ec9YJp9k17uPzNdjosG7i1LgCpZ+fakOuN00J7O0xSN35J0wdFQKmaYYbmFm/OYIljWNaqm//N45BSBXX/zQk1NXglsb1z+W7AD91beGLxILNynuQ8VW4B4XMoeBa5JUDgWHt+ZLAC/tBtvBtKrlx6Tz4Fpq13xgJHODda0/kbuDR5rJ2s77ZkfqficqTxsLq0kH6RbGT0Rxj4vkA048wA5B9MmhHiC9BNsN9ek495F5QfY6bGTgIb7KErvsYzW0YVgkrSVwWk8zaotJmtSijxazU=) 2025-05-19 21:34:02.829205 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEJhDfJfWg6CzUAhmfCsm32qI7cN1O21m5PfL7Djr58byrVpDfkcw8ZL+4sClGWovdRuK8c9EYvP/RiGwwFORkA=) 
2025-05-19 21:34:02.829984 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEEvHfEQtEjlrXKt37BLwQg2vxA3vWWJZhcm6ZWJ16Vm) 2025-05-19 21:34:02.830680 | orchestrator | 2025-05-19 21:34:02.831621 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:34:02.832358 | orchestrator | Monday 19 May 2025 21:34:02 +0000 (0:00:01.047) 0:00:12.273 ************ 2025-05-19 21:34:03.846375 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDehh0xHF5g68A/8CF4aZFQh9Yg7a3W6pB70AzUdoZrPlUwhZ0GOWLfWNR7hS0f9ydzAYjMz8RV31pxYJWC+gxbPhnKzHgt6XmB90BBlUJLAO6G3W7rImQZkqc7VTQJ0rFbLKGBnqsAqs4tKEILoHe/kqjigCd9WjxjYhxubsxeHtOHS+lb/CRnjAmpADhJqMwO8l7IYC90S6+s0gNizt/RJX0y+q0cn//W5KzTBQHpfrlpNESQ6yakqmgHAj+LwwkFEH+1kYYnhOvBzaEeiYPRsHVcvBcrCbXJ2vP5vVTIY8ArG1cxc8EWjcSWyIJ3K+oGPIm2lfVZjgANtk2B8DB0O6UmhB+k4yfT2ua/HqOE6h7/m5jYTc0DU7zpQWIjZOIk3nZSkCcvwxaNRmPo1IZlECsk6vj2OQNS99jRrz5omH3z4NPESgI/SttrpUBpl7UOLOr49XE2/ps85cF+Ra5c0/GP2jQL3QG/jnaASbK34WElqNIt3dQd4E8+ffsU0Tk=) 2025-05-19 21:34:03.846638 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOR4AGhs6B/aD5u0MJIdw7IhUY9BX/Qj6Zsm3UbT+ZTU) 2025-05-19 21:34:03.846909 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNfFOCjgETpN7Exv4fVzhWI0qY/zTKIMaoBN6RMF8qx3yZowVxV+bt3xmq/i1rxx/LVACIUj0sj9G2xWi0ypfUE=) 2025-05-19 21:34:03.847779 | orchestrator | 2025-05-19 21:34:03.848834 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-19 21:34:03.849658 | orchestrator | Monday 19 May 2025 21:34:03 +0000 (0:00:01.017) 0:00:13.291 ************ 2025-05-19 21:34:09.145750 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-19 21:34:09.145930 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-3) 2025-05-19 21:34:09.146652 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-19 21:34:09.147305 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-19 21:34:09.148208 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-19 21:34:09.148921 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-19 21:34:09.149570 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-19 21:34:09.150915 | orchestrator | 2025-05-19 21:34:09.152110 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-19 21:34:09.152725 | orchestrator | Monday 19 May 2025 21:34:09 +0000 (0:00:05.298) 0:00:18.590 ************ 2025-05-19 21:34:09.312413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-19 21:34:09.312734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-19 21:34:09.313963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-19 21:34:09.315049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-19 21:34:09.316083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-19 21:34:09.316629 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-19 21:34:09.317556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-19 21:34:09.318583 | orchestrator | 2025-05-19 21:34:09.319035 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:34:09.319646 | orchestrator | Monday 19 May 2025 21:34:09 +0000 (0:00:00.168) 0:00:18.759 ************ 2025-05-19 21:34:10.329562 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF3F67PsjqRFtyrzHA2027ejP1/ReUltHQZ4HEXJPtmr) 2025-05-19 21:34:10.330253 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEkbkFt6Ln0GzZ3q/PJ59IEBCLPEnjg/7RebrQQzS+Wl/+zqCfC+YRWYSFVJ7dbsTQC7qH2ln2gR2MwpJyNvCzDQsHZ+QMxeXlMx6MSrnGAyodXnamw59EoFT2moW06xiKM9InBkDY4fbkbNgm6iklF/ko9fG0lKNydyJm/naiDeCMDB68/RCWGOR15AsRJY/nUPGYWNgXujq2lmPZTwbe4fRIomSNcc937qxafvbDGc9Opq5I035IxQ3rF4iQfHjBIKOiM8V+9dPltBXpmZ3O4MXtHgKVrYhCVsBGaxOWDzglCBrLcdMtFz5j0640EtvX0IVSsg8zP8LTMxac1Pnrfpc6t0E5OtQJEUPNsPYDvWZnmMfEMKz7HDHVp8Exi0H5rdDjN615K105Kk4I3pRFmbezB/ZUf7i8ifMKOh4nsyPJAlXPYMNH/wtAVBwGQGhI9myWI52EZ0G2jEJXIyqIWGVm4utNxTzOlRDLueDkYDJEH/H/wjOwoAPaUvi0oWk=) 2025-05-19 21:34:10.330788 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC0ltcgCWvKGie5OEJkBqCpINWqMIJMh+l6FbOkCbyCwbRCgLO34nGn7WFjxjyIVvrxEBIcEiGZRq2lZs9Ccb2Q=) 2025-05-19 21:34:10.331679 | orchestrator | 2025-05-19 21:34:10.332012 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:34:10.332803 | orchestrator | Monday 19 May 2025 21:34:10 
+0000 (0:00:01.016) 0:00:19.775 ************ 2025-05-19 21:34:11.368561 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKSCdwctH0jcNVs2N8i0dAkYYvyc5wnkoiJCAVVAB2VxZcJJ8KugV+RuSk6oPU8hQF+k06KMNXW1FkNwKUEDU8Y=) 2025-05-19 21:34:11.369967 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCIaFSs/422oym+DNQTMCrVYr7q5xCarVfPrWosv9NgLs/PQ9XbI0uScNVL8w1h7CVnscageprAWVg8/txPQkGSWEtElmQOblWWZT3H7TKnKV4ExtoQaGXOLT6lNhMTlfYddDpZu+zpagZx3EwY8XaQnTwF5OLdUNqKyaG/GzF7LcavudjCUNXGW17oQALUSRmWbZqOWNkHiz6HKbIiAqKXhHC1xDGqkyKcRBwnszsHZrZHqBIYrNP8L87/o9skuOj7TJK1Qw1hybFyrz20o17+HxRSw7XX6mZVg8ssc4ZYUCbvoRzFhBvvuyPc50P+DNli8GgrLzBugKvmwz66+gX6a49RyMNbWSPlz3TPcrZ2oyI56Ut3DrcPpgtyZGoY4MJllFxyqu3f0ncYkOQ9hdBKHjd8rowFGoO0QIBqdyZUcMrHzkjisukilqOEGs/B3FU3kIEEopknSv2IYAohePLeLRSNnHY0tzvsor974vCZboanoGkpjwJc+t0/37b7b8s=) 2025-05-19 21:34:11.370657 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIHudDtz3nNYBcjQER+YMH10Nv14PCgureW5l3Mlzxvs) 2025-05-19 21:34:11.370928 | orchestrator | 2025-05-19 21:34:11.371539 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:34:11.371976 | orchestrator | Monday 19 May 2025 21:34:11 +0000 (0:00:01.039) 0:00:20.814 ************ 2025-05-19 21:34:12.405414 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO99S6k+9ihh4+dLyIg4lCklL3jOfSESzN01Q+mLSigS) 2025-05-19 21:34:12.406607 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDO5qhD2I3n9+NKVXo13Te9nXkvLWozWepBaEjQs1EAjsApDKVrHEVuAUC1FQOFLO0TkV5QgpdgPFzXlxEPKYg2YArq0GE67192wAstYbKUxIqQeT4Ya78ZSB6Hp6ry+IJCRHhdpogVrz3yn6uOlTnXcEqUPb0euaY2gVH4c+3S1P6TrkXGFiZxNaRSD/TbuLx8NyaG9HcCQQmeFV+OawtLOri21P9T86ooSGP75ros0qfAp5AKx58iXITUtInet8Uosq8uDQSnB8T08MAxDWtehw8DPVq6M32JFk0jK8m/toNGnJPQucDZ02jwRKh9YBw39/Cvk8J3eqHKL2+o5N9E8lSlNxCNU/ft03OUkQkn+ybXApKdPzVC3Trd2iThhRohlBbx2iT97Z2ZtCvMVwe5kuKXDFBewYbbZn7vSZsqNiWa3bWDLs7zeOjuOrJ4K7zRyr3gAXnn1u9m3V1/LEIQ6G4fGg9lCsBPobuxCndwfGcCJ8paep3q49l81ZFTd4k=) 2025-05-19 21:34:12.407416 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOTEvFgxaGX0sJjp+BQu5AMGHf2jPQW0QWLKbr7SaGv9nCKl/HYqor68WuYajkSawl4hFMm/Cym3UiAoAgvbQYk=) 2025-05-19 21:34:12.408112 | orchestrator | 2025-05-19 21:34:12.408781 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:34:12.409330 | orchestrator | Monday 19 May 2025 21:34:12 +0000 (0:00:01.036) 0:00:21.850 ************ 2025-05-19 21:34:13.432418 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA0hUIBgLF6x2VwFStGS+NATNpVz9xO5raDIXJ/mymoKIkMO+NFwtgEa060/qpxzdbdJUiN9Ro4FyMAsD8h/9Ng=) 2025-05-19 21:34:13.436066 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPrbHXev2E2LSuPsPeRvL6LWxUrwY+MvChC64+bFUUng) 2025-05-19 21:34:13.436251 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDScS8VDtnNFs4lv96LybZfqhfLCClQcanKKECmp4YfQCSp0/zS8+vqPmuROS25x1KgEyobEuSQLGpllTax//eQfoluurhlyEevGa8A9BCxMpmnUIU7VKj8fB0qTATDQJvSOUQWT5TeGFZPczEeIFlZMmwIlllK44AjLgiEljUWQ+YyO0lk4vZjoE3xiiJJwdojhU9pMtuMuNou3MQzmVkE7zozU6qJ3T/xgjU5KhudnFBPcc+LDsSAfyGN4OZF9JqZ3Yh1iVlgtj7M90mYbvEI/HjUzm9gu4qnhWhxe6xHXk2NGYE9UHxMfT78oSOspkdJGrycETRcvmfqoY4Uk0/Njiv/TKguVjm76Co6vJ9pYExUwjBfIeQxshWvWXhjgaNiF5B4AwCpNBwCZUXuR+CKUSg9lr+V8YbEnGI0rjm5gvan82LV2WVlfPWystGLpdV+QwzfsobxWEAOYw1VKF/iksze0vB/0Uy+og5JOTx5xS9LGvO1WOdDDwFaxrqiT58=) 2025-05-19 21:34:13.437089 | orchestrator | 2025-05-19 21:34:13.437574 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-19 21:34:13.438259 | orchestrator | Monday 19 May 2025 21:34:13 +0000 (0:00:01.027) 0:00:22.878 ************ 2025-05-19 21:34:14.486726 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBbiw+Y+5PtuxMKRLcVZCf6hh65Zdz7KIiI8uU4aZtYDuhPpQ/JlJaHnHqNOdwlfC0LQrzn89CLVE4nPqybles4=) 2025-05-19 21:34:14.487106 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCB5EoJw9IHoQRPNvvfKYz9bLdM/HYDHaAhr3evtliDiQMktYtv9z4St9AtXoVI2zN+3HhWcoCL6mp0k8lKEPMS5fxS1HkFYpvs/GpkFFY0IiNtEG3wnSUV2X1huBdfV7dk27pCZprzZ/bMMq47DKne0R1CtKlX4aUbMCDD/Vwq0NEvv421TlUq16rPMlfSKAJ6NldREvIDSLhvVyLdjGCu981aNzIt3n0lOGqUP8/ZiI7ZYzsQivF6+xS/KMnRnXJy9UxISwfZtezxbyVoIqkkS4OfGicScsdQE/4yp0WwiVx2dLVoMAne92SNnOIuPXly/VuHvnnyE6utut4V/03lziCk2Z+TMhlGaLLJhC034TtduxAPqaN4vmTjFJ0ozmxe7Xjdy5WtudptCbss5z1q6sMyivSNwqm/Pwz2Xo5NQCGYMocEl/kPr1IMQkCd/dJwOxwpc4bV0aHiZ1Vf0d0p224oMIVXt8nPrcmFW8sp+wZXYpRveHJUk64y3sbRk0M=) 2025-05-19 21:34:14.488162 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBE2uc4D9WzYcTJ4Ich7nFvrrPT1R+pp/nNgBHbb9K14) 2025-05-19 21:34:14.489221 | orchestrator | 2025-05-19 21:34:14.490447 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 21:34:14.491651 | orchestrator | Monday 19 May 2025 21:34:14 +0000 (0:00:01.049) 0:00:23.928 ************
2025-05-19 21:34:15.554368 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7GlzQ5C6gLxiDYCy8rqDcJDqNZhoVClPYqgPWzlElNgk8feXDGVXgPv2A6/twiMMZbkILibUDmTIi8dMz6lXnvihEmX+si2vtjClxUaDm5dYiyqWFpruavn3yGdtNaMU/XjW0bOv0IYH+sEA3jvEHW3DTbXxGpefUwdqsVbsIIrV+Ev7Urewdl4NvIO5Q6N9tUHMi+VHJSW2wdeK6Hm4nSbFgskxhE8F032Juf2Ec9YJp9k17uPzNdjosG7i1LgCpZ+fakOuN00J7O0xSN35J0wdFQKmaYYbmFm/OYIljWNaqm//N45BSBXX/zQk1NXglsb1z+W7AD91beGLxILNynuQ8VW4B4XMoeBa5JUDgWHt+ZLAC/tBtvBtKrlx6Tz4Fpq13xgJHODda0/kbuDR5rJ2s77ZkfqficqTxsLq0kH6RbGT0Rxj4vkA048wA5B9MmhHiC9BNsN9ek495F5QfY6bGTgIb7KErvsYzW0YVgkrSVwWk8zaotJmtSijxazU=)
2025-05-19 21:34:15.554885 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEJhDfJfWg6CzUAhmfCsm32qI7cN1O21m5PfL7Djr58byrVpDfkcw8ZL+4sClGWovdRuK8c9EYvP/RiGwwFORkA=)
2025-05-19 21:34:15.555581 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEEvHfEQtEjlrXKt37BLwQg2vxA3vWWJZhcm6ZWJ16Vm)
2025-05-19 21:34:15.556197 | orchestrator |
2025-05-19 21:34:15.556995 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-05-19 21:34:15.557921 | orchestrator | Monday 19 May 2025 21:34:15 +0000 (0:00:01.071) 0:00:25.000 ************
2025-05-19 21:34:16.571768 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOR4AGhs6B/aD5u0MJIdw7IhUY9BX/Qj6Zsm3UbT+ZTU)
2025-05-19 21:34:16.571896 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDehh0xHF5g68A/8CF4aZFQh9Yg7a3W6pB70AzUdoZrPlUwhZ0GOWLfWNR7hS0f9ydzAYjMz8RV31pxYJWC+gxbPhnKzHgt6XmB90BBlUJLAO6G3W7rImQZkqc7VTQJ0rFbLKGBnqsAqs4tKEILoHe/kqjigCd9WjxjYhxubsxeHtOHS+lb/CRnjAmpADhJqMwO8l7IYC90S6+s0gNizt/RJX0y+q0cn//W5KzTBQHpfrlpNESQ6yakqmgHAj+LwwkFEH+1kYYnhOvBzaEeiYPRsHVcvBcrCbXJ2vP5vVTIY8ArG1cxc8EWjcSWyIJ3K+oGPIm2lfVZjgANtk2B8DB0O6UmhB+k4yfT2ua/HqOE6h7/m5jYTc0DU7zpQWIjZOIk3nZSkCcvwxaNRmPo1IZlECsk6vj2OQNS99jRrz5omH3z4NPESgI/SttrpUBpl7UOLOr49XE2/ps85cF+Ra5c0/GP2jQL3QG/jnaASbK34WElqNIt3dQd4E8+ffsU0Tk=)
2025-05-19 21:34:16.572092 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNfFOCjgETpN7Exv4fVzhWI0qY/zTKIMaoBN6RMF8qx3yZowVxV+bt3xmq/i1rxx/LVACIUj0sj9G2xWi0ypfUE=)
2025-05-19 21:34:16.572657 | orchestrator |
2025-05-19 21:34:16.573571 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-05-19 21:34:16.573947 | orchestrator | Monday 19 May 2025 21:34:16 +0000 (0:00:01.017) 0:00:26.017 ************
2025-05-19 21:34:16.858209 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-19 21:34:16.858310 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-19 21:34:16.860114 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-19 21:34:16.861494 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-19 21:34:16.862344 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-19 21:34:16.863125 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-19 21:34:16.863860 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-19 21:34:16.864473 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:34:16.864847 | orchestrator |
2025-05-19 21:34:16.865525 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-05-19 21:34:16.865949 | orchestrator | Monday 19 May 2025 21:34:16 +0000 (0:00:00.287) 0:00:26.304 ************
2025-05-19 21:34:17.005206 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:34:17.005368 | orchestrator |
2025-05-19 21:34:17.006168 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-05-19 21:34:17.006607 | orchestrator | Monday 19 May 2025 21:34:16 +0000 (0:00:00.147) 0:00:26.452 ************
2025-05-19 21:34:17.071433 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:34:17.073711 | orchestrator |
2025-05-19 21:34:17.073742 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-05-19 21:34:17.074048 | orchestrator | Monday 19 May 2025 21:34:17 +0000 (0:00:00.066) 0:00:26.518 ************
2025-05-19 21:34:17.585030 | orchestrator | changed: [testbed-manager]
2025-05-19 21:34:17.585907 | orchestrator |
2025-05-19 21:34:17.585943 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:34:17.585955 | orchestrator | 2025-05-19 21:34:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 21:34:17.585965 | orchestrator | 2025-05-19 21:34:17 | INFO  | Please wait and do not abort execution.
2025-05-19 21:34:17.586612 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 21:34:17.586718 | orchestrator |
2025-05-19 21:34:17.587540 | orchestrator |
2025-05-19 21:34:17.587835 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:34:17.588217 | orchestrator | Monday 19 May 2025 21:34:17 +0000 (0:00:00.512) 0:00:27.031 ************
2025-05-19 21:34:17.588328 | orchestrator | ===============================================================================
2025-05-19 21:34:17.588694 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.64s
2025-05-19 21:34:17.589571 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.30s
2025-05-19 21:34:17.589821 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s
2025-05-19 21:34:17.590103 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-05-19 21:34:17.590625 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-19 21:34:17.590986 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-19 21:34:17.591312 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-19 21:34:17.591841 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-19 21:34:17.592097 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-19 21:34:17.592720 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-19 21:34:17.593048 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-19 21:34:17.593352 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-19 21:34:17.593711 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-05-19 21:34:17.593953 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-05-19 21:34:17.594298 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-05-19 21:34:17.594613 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-05-19 21:34:17.595074 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s
2025-05-19 21:34:17.595324 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.29s
2025-05-19 21:34:17.595639 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2025-05-19 21:34:17.595715 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2025-05-19 21:34:17.987312 | orchestrator | + osism apply squid
2025-05-19 21:34:19.590136 | orchestrator | 2025-05-19 21:34:19 | INFO  | Task ca7831ef-1730-4298-831e-7645c4d58799 (squid) was prepared for execution.
2025-05-19 21:34:19.590253 | orchestrator | 2025-05-19 21:34:19 | INFO  | It takes a moment until task ca7831ef-1730-4298-831e-7645c4d58799 (squid) has been started and output is visible here.
2025-05-19 21:34:23.416707 | orchestrator |
2025-05-19 21:34:23.417909 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-05-19 21:34:23.418492 | orchestrator |
2025-05-19 21:34:23.418521 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-05-19 21:34:23.419747 | orchestrator | Monday 19 May 2025 21:34:23 +0000 (0:00:00.167) 0:00:00.167 ************
2025-05-19 21:34:23.507097 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-05-19 21:34:23.507906 | orchestrator |
2025-05-19 21:34:23.508577 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-05-19 21:34:23.509348 | orchestrator | Monday 19 May 2025 21:34:23 +0000 (0:00:00.091) 0:00:00.259 ************
2025-05-19 21:34:24.872901 | orchestrator | ok: [testbed-manager]
2025-05-19 21:34:24.873167 | orchestrator |
2025-05-19 21:34:24.874128 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-05-19 21:34:24.874472 | orchestrator | Monday 19 May 2025 21:34:24 +0000 (0:00:01.363) 0:00:01.622 ************
2025-05-19 21:34:25.967688 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-05-19 21:34:25.968123 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-05-19 21:34:25.969137 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-05-19 21:34:25.970101 | orchestrator |
2025-05-19 21:34:25.970853 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-05-19 21:34:25.971521 | orchestrator | Monday 19 May 2025 21:34:25 +0000 (0:00:01.094) 0:00:02.717 ************
2025-05-19 21:34:27.023529 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-05-19 21:34:27.023649 | orchestrator |
2025-05-19 21:34:27.023679 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-05-19 21:34:27.023692 | orchestrator | Monday 19 May 2025 21:34:27 +0000 (0:00:01.053) 0:00:03.770 ************
2025-05-19 21:34:27.361209 | orchestrator | ok: [testbed-manager]
2025-05-19 21:34:27.361333 | orchestrator |
2025-05-19 21:34:27.362056 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-05-19 21:34:27.363200 | orchestrator | Monday 19 May 2025 21:34:27 +0000 (0:00:00.340) 0:00:04.111 ************
2025-05-19 21:34:28.240877 | orchestrator | changed: [testbed-manager]
2025-05-19 21:34:28.241029 | orchestrator |
2025-05-19 21:34:28.241835 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-05-19 21:34:28.242678 | orchestrator | Monday 19 May 2025 21:34:28 +0000 (0:00:00.879) 0:00:04.991 ************
2025-05-19 21:34:59.588359 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-05-19 21:34:59.588558 | orchestrator | ok: [testbed-manager]
2025-05-19 21:34:59.588583 | orchestrator |
2025-05-19 21:34:59.588596 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-05-19 21:34:59.588894 | orchestrator | Monday 19 May 2025 21:34:59 +0000 (0:00:31.344) 0:00:36.335 ************
2025-05-19 21:35:11.769139 | orchestrator | changed: [testbed-manager]
2025-05-19 21:35:11.769288 | orchestrator |
2025-05-19 21:35:11.769306 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-05-19 21:35:11.769320 | orchestrator | Monday 19 May 2025 21:35:11 +0000 (0:00:12.181) 0:00:48.517 ************
2025-05-19 21:36:11.862957 | orchestrator | Pausing for 60 seconds
2025-05-19 21:36:11.864494 | orchestrator | changed: [testbed-manager]
2025-05-19 21:36:11.864526 | orchestrator |
2025-05-19 21:36:11.864536 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-05-19 21:36:11.864548 | orchestrator | Monday 19 May 2025 21:36:11 +0000 (0:01:00.083) 0:01:48.600 ************
2025-05-19 21:36:11.910105 | orchestrator | ok: [testbed-manager]
2025-05-19 21:36:11.910427 | orchestrator |
2025-05-19 21:36:11.911641 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-05-19 21:36:11.911749 | orchestrator | Monday 19 May 2025 21:36:11 +0000 (0:00:00.060) 0:01:48.661 ************
2025-05-19 21:36:12.519624 | orchestrator | changed: [testbed-manager]
2025-05-19 21:36:12.521971 | orchestrator |
2025-05-19 21:36:12.522063 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:36:12.522378 | orchestrator | 2025-05-19 21:36:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 21:36:12.522407 | orchestrator | 2025-05-19 21:36:12 | INFO  | Please wait and do not abort execution.
2025-05-19 21:36:12.526494 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 21:36:12.526983 | orchestrator |
2025-05-19 21:36:12.527592 | orchestrator |
2025-05-19 21:36:12.527882 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:36:12.528602 | orchestrator | Monday 19 May 2025 21:36:12 +0000 (0:00:00.609) 0:01:49.271 ************
2025-05-19 21:36:12.531638 | orchestrator | ===============================================================================
2025-05-19 21:36:12.531989 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-05-19 21:36:12.532717 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.34s
2025-05-19 21:36:12.532875 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.18s
2025-05-19 21:36:12.533482 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.36s
2025-05-19 21:36:12.534383 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.09s
2025-05-19 21:36:12.534763 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.05s
2025-05-19 21:36:12.535527 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s
2025-05-19 21:36:12.536959 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s
2025-05-19 21:36:12.537572 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.34s
2025-05-19 21:36:12.541313 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s
2025-05-19 21:36:12.541565 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2025-05-19 21:36:13.025841 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-19 21:36:13.027441 | orchestrator | ++ semver latest 9.0.0
2025-05-19 21:36:13.077223 | orchestrator | + [[ -1 -lt 0 ]]
2025-05-19 21:36:13.077326 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-19 21:36:13.077404 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-05-19 21:36:14.755855 | orchestrator | 2025-05-19 21:36:14 | INFO  | Task e6a37578-7dc5-482a-b544-9d7d2639e704 (operator) was prepared for execution.
2025-05-19 21:36:14.755980 | orchestrator | 2025-05-19 21:36:14 | INFO  | It takes a moment until task e6a37578-7dc5-482a-b544-9d7d2639e704 (operator) has been started and output is visible here.
2025-05-19 21:36:18.587476 | orchestrator |
2025-05-19 21:36:18.588858 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-05-19 21:36:18.590420 | orchestrator |
2025-05-19 21:36:18.591129 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-19 21:36:18.591922 | orchestrator | Monday 19 May 2025 21:36:18 +0000 (0:00:00.117) 0:00:00.117 ************
2025-05-19 21:36:21.842684 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:36:21.842831 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:36:21.843785 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:36:21.844113 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:36:21.845123 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:36:21.845564 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:36:21.846289 | orchestrator |
2025-05-19 21:36:21.846716 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-05-19 21:36:21.847044 | orchestrator | Monday 19 May 2025 21:36:21 +0000 (0:00:03.257) 0:00:03.374 ************
2025-05-19 21:36:22.595204 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:36:22.596197 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:36:22.597185 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:36:22.598097 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:36:22.599414 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:36:22.599435 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:36:22.599970 | orchestrator |
2025-05-19 21:36:22.600699 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-05-19 21:36:22.601412 | orchestrator |
2025-05-19 21:36:22.602160 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-05-19 21:36:22.602477 | orchestrator | Monday 19 May 2025 21:36:22 +0000 (0:00:00.752) 0:00:04.127 ************
2025-05-19 21:36:22.655572 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:36:22.675074 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:36:22.687486 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:36:22.718367 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:36:22.718423 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:36:22.719612 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:36:22.720387 | orchestrator |
2025-05-19 21:36:22.721202 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-05-19 21:36:22.722206 | orchestrator | Monday 19 May 2025 21:36:22 +0000 (0:00:00.123) 0:00:04.251 ************
2025-05-19 21:36:22.788551 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:36:22.809994 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:36:22.849442 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:36:22.849631 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:36:22.850521 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:36:22.851625 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:36:22.851649 | orchestrator |
2025-05-19 21:36:22.852251 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-05-19 21:36:22.853011 | orchestrator | Monday 19 May 2025 21:36:22 +0000 (0:00:00.130) 0:00:04.381 ************
2025-05-19 21:36:23.444517 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:36:23.448501 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:36:23.448576 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:36:23.448591 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:23.449673 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:23.449696 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:23.450580 | orchestrator |
2025-05-19 21:36:23.451434 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-05-19 21:36:23.452129 | orchestrator | Monday 19 May 2025 21:36:23 +0000 (0:00:00.594) 0:00:04.976 ************
2025-05-19 21:36:24.222352 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:36:24.222466 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:36:24.223494 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:36:24.224517 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:24.225338 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:24.226241 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:24.226677 | orchestrator |
2025-05-19 21:36:24.227172 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-05-19 21:36:24.227914 | orchestrator | Monday 19 May 2025 21:36:24 +0000 (0:00:00.772) 0:00:05.748 ************
2025-05-19 21:36:25.392926 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-05-19 21:36:25.393006 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-05-19 21:36:25.393013 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-05-19 21:36:25.393938 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-05-19 21:36:25.395718 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-05-19 21:36:25.395755 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-05-19 21:36:25.396690 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-05-19 21:36:25.397687 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-05-19 21:36:25.398489 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-05-19 21:36:25.400265 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-05-19 21:36:25.400292 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-05-19 21:36:25.400724 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-05-19 21:36:25.401432 | orchestrator |
2025-05-19 21:36:25.401646 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-05-19 21:36:25.402342 | orchestrator | Monday 19 May 2025 21:36:25 +0000 (0:00:01.170) 0:00:06.919 ************
2025-05-19 21:36:26.673377 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:36:26.673479 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:36:26.674503 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:36:26.674978 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:26.677286 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:26.677603 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:26.677846 | orchestrator |
2025-05-19 21:36:26.678946 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-05-19 21:36:26.681566 | orchestrator | Monday 19 May 2025 21:36:26 +0000 (0:00:01.260) 0:00:08.179 ************
2025-05-19 21:36:27.853181 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-05-19 21:36:27.855320 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-05-19 21:36:27.855389 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-05-19 21:36:27.908575 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-05-19 21:36:27.909112 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-05-19 21:36:27.909860 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-05-19 21:36:27.910393 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-05-19 21:36:27.911399 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-05-19 21:36:27.912179 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-05-19 21:36:27.912989 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-05-19 21:36:27.913636 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-05-19 21:36:27.914303 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-05-19 21:36:27.915069 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-05-19 21:36:27.915348 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-05-19 21:36:27.916159 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-05-19 21:36:27.916451 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-05-19 21:36:27.916889 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-05-19 21:36:27.917194 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-05-19 21:36:27.917674 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-05-19 21:36:27.918147 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-05-19 21:36:27.918411 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-05-19 21:36:27.918834 | orchestrator |
2025-05-19 21:36:27.919232 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-19 21:36:27.920929 | orchestrator | Monday 19 May 2025 21:36:27 +0000 (0:00:01.260) 0:00:09.439 ************
2025-05-19 21:36:28.491049 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:36:28.491833 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:36:28.493559 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:28.496424 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:28.496477 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:36:28.496534 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:28.497397 | orchestrator |
2025-05-19 21:36:28.498245 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-19 21:36:28.498879 | orchestrator | Monday 19 May 2025 21:36:28 +0000 (0:00:00.582) 0:00:10.022 ************
2025-05-19 21:36:28.567609 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:36:28.594579 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:36:28.628109 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:36:28.676272 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:36:28.676455 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:36:28.676949 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:36:28.680007 | orchestrator |
2025-05-19 21:36:28.680891 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-19 21:36:28.681631 | orchestrator | Monday 19 May 2025 21:36:28 +0000 (0:00:00.185) 0:00:10.207 ************
2025-05-19 21:36:29.396634 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-19 21:36:29.396799 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-19 21:36:29.397192 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:36:29.397934 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:36:29.398414 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-19 21:36:29.401038 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-19 21:36:29.401781 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:29.402224 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:29.402842 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-19 21:36:29.405671 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-19 21:36:29.406092 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:29.407708 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:36:29.408516 | orchestrator |
2025-05-19 21:36:29.409332 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-19 21:36:29.410108 | orchestrator | Monday 19 May 2025 21:36:29 +0000 (0:00:00.717) 0:00:10.924 ************
2025-05-19 21:36:29.442165 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:36:29.461898 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:36:29.480681 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:36:29.498886 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:36:29.535037 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:36:29.535189 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:36:29.536248 | orchestrator |
2025-05-19 21:36:29.536334 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-19 21:36:29.537436 | orchestrator | Monday 19 May 2025 21:36:29 +0000 (0:00:00.142) 0:00:11.067 ************
2025-05-19 21:36:29.591629 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:36:29.644731 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:36:29.674951 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:36:29.719922 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:36:29.721554 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:36:29.721695 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:36:29.723683 | orchestrator |
2025-05-19 21:36:29.724738 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-19 21:36:29.725761 | orchestrator | Monday 19 May 2025 21:36:29 +0000 (0:00:00.182) 0:00:11.250 ************
2025-05-19 21:36:29.796689 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:36:29.824486 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:36:29.844100 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:36:29.888062 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:36:29.890426 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:36:29.890472 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:36:29.891366 | orchestrator |
2025-05-19 21:36:29.892295 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-19 21:36:29.893564 | orchestrator | Monday 19 May 2025 21:36:29 +0000 (0:00:00.167) 0:00:11.418 ************
2025-05-19 21:36:30.546809 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:36:30.547321 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:36:30.547725 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:36:30.548818 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:30.549700 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:30.550467 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:30.551604 | orchestrator |
2025-05-19 21:36:30.552401 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-19 21:36:30.552930 | orchestrator | Monday 19 May 2025 21:36:30 +0000 (0:00:00.659) 0:00:12.077 ************
2025-05-19 21:36:30.637323 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:36:30.668018 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:36:30.763135 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:36:30.763263 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:36:30.763746 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:36:30.764936 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:36:30.765226 | orchestrator |
2025-05-19 21:36:30.767945 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:36:30.770152 | orchestrator | 2025-05-19 21:36:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 21:36:30.770597 | orchestrator | 2025-05-19 21:36:30 | INFO  | Please wait and do not abort execution.
2025-05-19 21:36:30.771589 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 21:36:30.772450 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 21:36:30.773394 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 21:36:30.773957 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 21:36:30.774659 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 21:36:30.775418 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 21:36:30.776155 | orchestrator |
2025-05-19 21:36:30.776857 | orchestrator |
2025-05-19 21:36:30.777648 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:36:30.778363 | orchestrator | Monday 19 May 2025 21:36:30 +0000 (0:00:00.215) 0:00:12.292 ************
2025-05-19 21:36:30.778921 | orchestrator | ===============================================================================
2025-05-19 21:36:30.779646 | orchestrator | Gathering Facts --------------------------------------------------------- 3.26s
2025-05-19 21:36:30.779791 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s
2025-05-19 21:36:30.780555 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s
2025-05-19 21:36:30.780981 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s
2025-05-19 21:36:30.781600 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s
2025-05-19 21:36:30.782110 | orchestrator | Do not require tty for all users ---------------------------------------- 0.75s
2025-05-19 21:36:30.782539 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s
2025-05-19 21:36:30.783085 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s
2025-05-19 21:36:30.783558 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.59s
2025-05-19 21:36:30.784093 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s
2025-05-19 21:36:30.784809 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s
2025-05-19 21:36:30.785412 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2025-05-19 21:36:30.786106 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2025-05-19 21:36:30.786329 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2025-05-19 21:36:30.786973 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-05-19 21:36:30.787353 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s
2025-05-19 21:36:30.788705 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.12s
2025-05-19 21:36:31.212823 | orchestrator | + osism apply --environment custom facts
2025-05-19 21:36:32.827071 | orchestrator | 2025-05-19 21:36:32 | INFO  | Trying to run play facts in environment custom
2025-05-19 21:36:32.883748 | orchestrator | 2025-05-19 21:36:32 | INFO  | Task c3ef53b4-c687-4bfa-9ff1-fd9f57e4c221 (facts) was prepared for execution.
2025-05-19 21:36:32.883855 | orchestrator | 2025-05-19 21:36:32 | INFO  | It takes a moment until task c3ef53b4-c687-4bfa-9ff1-fd9f57e4c221 (facts) has been started and output is visible here.
2025-05-19 21:36:36.688044 | orchestrator |
2025-05-19 21:36:36.688646 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-19 21:36:36.694869 | orchestrator |
2025-05-19 21:36:36.694924 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-19 21:36:36.695956 | orchestrator | Monday 19 May 2025 21:36:36 +0000 (0:00:00.089) 0:00:00.089 ************
2025-05-19 21:36:38.052512 | orchestrator | ok: [testbed-manager]
2025-05-19 21:36:38.054508 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:36:38.055644 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:38.056506 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:36:38.056918 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:38.057502 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:38.059577 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:36:38.059655 | orchestrator |
2025-05-19 21:36:38.059681 | orchestrator | TASK [Copy fact file] **********************************************************
2025-05-19 21:36:38.059700 | orchestrator | Monday 19 May 2025 21:36:38 +0000 (0:00:01.359) 0:00:01.448 ************
2025-05-19 21:36:39.296925 | orchestrator | ok: [testbed-manager]
2025-05-19 21:36:39.297045 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:39.297592 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:36:39.300233 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:36:39.300859 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:39.301412 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:39.302243 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:36:39.302954 | orchestrator |
2025-05-19 21:36:39.303696 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-05-19 21:36:39.304573 | orchestrator |
2025-05-19 21:36:39.305202 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-19 21:36:39.306246 | orchestrator | Monday 19 May 2025 21:36:39 +0000 (0:00:01.250) 0:00:02.699 ************
2025-05-19 21:36:39.415008 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:36:39.415232 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:36:39.416039 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:36:39.416751 | orchestrator |
2025-05-19 21:36:39.417221 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-19 21:36:39.417785 | orchestrator | Monday 19 May 2025 21:36:39 +0000 (0:00:00.119) 0:00:02.818 ************
2025-05-19 21:36:39.615914 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:36:39.616004 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:36:39.616116 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:36:39.616883 | orchestrator |
2025-05-19 21:36:39.617957 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-19 21:36:39.618964 | orchestrator | Monday 19 May 2025 21:36:39 +0000 (0:00:00.201) 0:00:03.020 ************
2025-05-19 21:36:39.798445 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:36:39.798642 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:36:39.799933 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:36:39.800756 | orchestrator |
2025-05-19 21:36:39.801397 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-19 21:36:39.802220 | orchestrator | Monday 19 May 2025 21:36:39 +0000 (0:00:00.180) 0:00:03.200 ************
2025-05-19 21:36:39.928652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 21:36:39.928793 | orchestrator |
2025-05-19 21:36:39.930475 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-19 21:36:39.930563 | orchestrator | Monday 19 May 2025 21:36:39 +0000 (0:00:00.132) 0:00:03.332 ************
2025-05-19 21:36:40.449882 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:36:40.450670 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:36:40.454079 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:36:40.454189 | orchestrator |
2025-05-19 21:36:40.454208 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-19 21:36:40.454222 | orchestrator | Monday 19 May 2025 21:36:40 +0000 (0:00:00.521) 0:00:03.854 ************
2025-05-19 21:36:40.562935 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:36:40.563383 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:36:40.563992 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:36:40.564298 | orchestrator |
2025-05-19 21:36:40.564732 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-19 21:36:40.565225 | orchestrator | Monday 19 May 2025 21:36:40 +0000 (0:00:00.113) 0:00:03.967 ************
2025-05-19 21:36:41.623676 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:41.623890 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:41.625551 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:41.627213 | orchestrator |
2025-05-19 21:36:41.627905 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-19 21:36:41.628550 | orchestrator | Monday 19 May 2025 21:36:41 +0000 (0:00:01.058) 0:00:05.026 ************
2025-05-19 21:36:42.078890 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:36:42.079097 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:36:42.079256 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:36:42.079868 | orchestrator |
2025-05-19 21:36:42.080037 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-19 21:36:42.080637 | orchestrator | Monday 19 May 2025 21:36:42 +0000 (0:00:00.453) 0:00:05.479 ************
2025-05-19 21:36:43.123794 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:43.124583 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:43.125492 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:43.127669 | orchestrator |
2025-05-19 21:36:43.128360 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-19 21:36:43.129253 | orchestrator | Monday 19 May 2025 21:36:43 +0000 (0:00:01.046) 0:00:06.526 ************
2025-05-19 21:36:56.314942 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:36:56.315064 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:36:56.315072 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:36:56.315077 | orchestrator |
2025-05-19 21:36:56.315082 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-05-19 21:36:56.315087 | orchestrator | Monday 19 May 2025 21:36:56 +0000 (0:00:13.187) 0:00:19.714 ************
2025-05-19 21:36:56.368182 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:36:56.398946 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:36:56.400893 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:36:56.401708 | orchestrator |
2025-05-19 21:36:56.403442 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-05-19 21:36:56.404726 | orchestrator | Monday 19 May 2025 21:36:56 +0000 (0:00:00.089) 0:00:19.803 ************
2025-05-19 21:37:03.682328 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:37:03.682429 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:37:03.683285 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:37:03.683667 | orchestrator |
2025-05-19 21:37:03.685241 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-19 21:37:03.686429 | orchestrator | Monday 19 May 2025 21:37:03 +0000 (0:00:07.280) 0:00:27.084 ************
2025-05-19 21:37:04.100275 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:04.100367 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:04.100958 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:04.102295 | orchestrator |
2025-05-19 21:37:04.102846 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-19 21:37:04.106386 | orchestrator | Monday 19 May 2025 21:37:04 +0000 (0:00:00.420) 0:00:27.504 ************
2025-05-19 21:37:07.609811 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-05-19 21:37:07.609921 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-05-19 21:37:07.611649 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-05-19 21:37:07.612005 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-05-19 21:37:07.613950 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-05-19 21:37:07.615547 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-05-19 21:37:07.617680 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-05-19 21:37:07.617766 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-05-19 21:37:07.618246 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-05-19 21:37:07.618833 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-05-19 21:37:07.619477 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-05-19 21:37:07.620188 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-05-19 21:37:07.620877 | orchestrator |
2025-05-19 21:37:07.622260 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-19 21:37:07.622305 | orchestrator | Monday 19 May 2025 21:37:07 +0000 (0:00:03.507) 0:00:31.012 ************
2025-05-19 21:37:08.920680 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:08.921245 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:08.925075 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:08.925125 | orchestrator |
2025-05-19 21:37:08.925139 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-19 21:37:08.925153 | orchestrator |
2025-05-19 21:37:08.925496 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-19 21:37:08.926714 | orchestrator | Monday 19 May 2025 21:37:08 +0000 (0:00:01.311) 0:00:32.323 ************
2025-05-19 21:37:12.689453 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:12.689572 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:12.690269 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:12.690765 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:12.691505 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:12.691937 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:12.692608 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:12.693735 | orchestrator |
2025-05-19 21:37:12.694421 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:37:12.695035 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 21:37:12.695146 | orchestrator | 2025-05-19 21:37:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 21:37:12.695162 | orchestrator | 2025-05-19 21:37:12 | INFO  | Please wait and do not abort execution.
2025-05-19 21:37:12.696040 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 21:37:12.696939 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 21:37:12.697867 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 21:37:12.698685 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:37:12.699305 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:37:12.699938 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:37:12.700427 | orchestrator |
2025-05-19 21:37:12.700945 | orchestrator |
2025-05-19 21:37:12.701526 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:37:12.702133 | orchestrator | Monday 19 May 2025 21:37:12 +0000 (0:00:03.769) 0:00:36.093 ************
2025-05-19 21:37:12.702564 | orchestrator | ===============================================================================
2025-05-19 21:37:12.703053 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.19s
2025-05-19 21:37:12.703664 | orchestrator | Install required packages (Debian) -------------------------------------- 7.28s
2025-05-19 21:37:12.704302 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.77s
2025-05-19 21:37:12.704790 | orchestrator | Copy fact files --------------------------------------------------------- 3.51s 2025-05-19 21:37:12.705537 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s 2025-05-19 21:37:12.705923 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.31s 2025-05-19 21:37:12.706566 | orchestrator | Copy fact file ---------------------------------------------------------- 1.25s 2025-05-19 21:37:12.706968 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s 2025-05-19 21:37:12.707615 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s 2025-05-19 21:37:12.708640 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.52s 2025-05-19 21:37:12.709167 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s 2025-05-19 21:37:12.709930 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s 2025-05-19 21:37:12.710461 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2025-05-19 21:37:12.710918 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s 2025-05-19 21:37:12.711509 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2025-05-19 21:37:12.711879 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2025-05-19 21:37:12.712304 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-05-19 21:37:12.712815 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s 2025-05-19 21:37:13.124146 | orchestrator | + osism apply bootstrap 2025-05-19 21:37:14.795737 | orchestrator | 2025-05-19 21:37:14 | 
INFO  | Task 7a165552-7468-47ae-8853-1222b24bbad6 (bootstrap) was prepared for execution. 2025-05-19 21:37:14.795889 | orchestrator | 2025-05-19 21:37:14 | INFO  | It takes a moment until task 7a165552-7468-47ae-8853-1222b24bbad6 (bootstrap) has been started and output is visible here. 2025-05-19 21:37:18.810499 | orchestrator | 2025-05-19 21:37:18.812218 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-19 21:37:18.812632 | orchestrator | 2025-05-19 21:37:18.813776 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-19 21:37:18.814395 | orchestrator | Monday 19 May 2025 21:37:18 +0000 (0:00:00.167) 0:00:00.167 ************ 2025-05-19 21:37:18.880159 | orchestrator | ok: [testbed-manager] 2025-05-19 21:37:18.904905 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:37:18.930248 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:37:18.954760 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:37:19.035854 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:37:19.037087 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:37:19.037744 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:37:19.038367 | orchestrator | 2025-05-19 21:37:19.039177 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 21:37:19.040240 | orchestrator | 2025-05-19 21:37:19.040878 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 21:37:19.042442 | orchestrator | Monday 19 May 2025 21:37:19 +0000 (0:00:00.233) 0:00:00.400 ************ 2025-05-19 21:37:22.753304 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:37:22.753588 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:37:22.754086 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:37:22.754510 | orchestrator | ok: [testbed-manager] 2025-05-19 21:37:22.754870 | orchestrator | ok: [testbed-node-3] 2025-05-19 
21:37:22.755366 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:37:22.755693 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:37:22.756153 | orchestrator | 2025-05-19 21:37:22.756539 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-19 21:37:22.756921 | orchestrator | 2025-05-19 21:37:22.757371 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 21:37:22.757842 | orchestrator | Monday 19 May 2025 21:37:22 +0000 (0:00:03.717) 0:00:04.117 ************ 2025-05-19 21:37:22.845885 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-19 21:37:22.847880 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-19 21:37:22.883745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-19 21:37:22.884091 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-19 21:37:22.884473 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-19 21:37:22.885137 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-19 21:37:22.885431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 21:37:22.885803 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-19 21:37:22.886186 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-19 21:37:22.934651 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-19 21:37:22.934833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 21:37:22.937439 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-19 21:37:22.937537 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-19 21:37:22.937554 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-19 21:37:22.937566 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-0)  2025-05-19 21:37:23.214140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 21:37:23.214784 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-19 21:37:23.217148 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 21:37:23.217504 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-19 21:37:23.218243 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:37:23.219069 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-19 21:37:23.220255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 21:37:23.220520 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 21:37:23.221394 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:37:23.222312 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-19 21:37:23.222988 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-19 21:37:23.223434 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 21:37:23.224388 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-19 21:37:23.225265 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-19 21:37:23.225655 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 21:37:23.226185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 21:37:23.227706 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-19 21:37:23.228526 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 21:37:23.228790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 21:37:23.229544 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:37:23.230245 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-5)  2025-05-19 21:37:23.230581 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 21:37:23.231427 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 21:37:23.231925 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-19 21:37:23.232376 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-19 21:37:23.233085 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 21:37:23.233472 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 21:37:23.234312 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-19 21:37:23.234729 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 21:37:23.235185 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:37:23.236036 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-19 21:37:23.236252 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-19 21:37:23.237161 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-19 21:37:23.237408 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:37:23.238225 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 21:37:23.240487 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:37:23.240789 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-19 21:37:23.241597 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-19 21:37:23.242271 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-19 21:37:23.242525 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-19 21:37:23.244576 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:37:23.245376 | orchestrator | 2025-05-19 21:37:23.246079 | orchestrator | PLAY [Apply bootstrap roles part 1] 
******************************************** 2025-05-19 21:37:23.246485 | orchestrator | 2025-05-19 21:37:23.247246 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-19 21:37:23.248087 | orchestrator | Monday 19 May 2025 21:37:23 +0000 (0:00:00.459) 0:00:04.577 ************ 2025-05-19 21:37:24.402528 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:37:24.402712 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:37:24.402844 | orchestrator | ok: [testbed-manager] 2025-05-19 21:37:24.405649 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:37:24.406936 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:37:24.408222 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:37:24.409386 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:37:24.410219 | orchestrator | 2025-05-19 21:37:24.411344 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-19 21:37:24.411845 | orchestrator | Monday 19 May 2025 21:37:24 +0000 (0:00:01.188) 0:00:05.765 ************ 2025-05-19 21:37:25.602833 | orchestrator | ok: [testbed-manager] 2025-05-19 21:37:25.603672 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:37:25.605211 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:37:25.606078 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:37:25.607483 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:37:25.608163 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:37:25.609069 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:37:25.609869 | orchestrator | 2025-05-19 21:37:25.610733 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-19 21:37:25.611477 | orchestrator | Monday 19 May 2025 21:37:25 +0000 (0:00:01.197) 0:00:06.963 ************ 2025-05-19 21:37:25.904472 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:37:25.904577 | orchestrator | 2025-05-19 21:37:25.904919 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-19 21:37:25.905685 | orchestrator | Monday 19 May 2025 21:37:25 +0000 (0:00:00.303) 0:00:07.267 ************ 2025-05-19 21:37:28.075967 | orchestrator | changed: [testbed-manager] 2025-05-19 21:37:28.076816 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:37:28.562205 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:37:28.562281 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:37:28.562296 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:37:28.562307 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:37:28.562318 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:37:28.562330 | orchestrator | 2025-05-19 21:37:28.562342 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-19 21:37:28.562355 | orchestrator | Monday 19 May 2025 21:37:28 +0000 (0:00:02.171) 0:00:09.438 ************ 2025-05-19 21:37:28.562367 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:37:28.562380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:37:28.562392 | orchestrator | 2025-05-19 21:37:28.562403 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-19 21:37:28.562414 | orchestrator | Monday 19 May 2025 21:37:28 +0000 (0:00:00.234) 0:00:09.672 ************ 2025-05-19 21:37:29.301936 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:37:29.302617 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:37:29.302647 | orchestrator | changed: [testbed-node-0] 2025-05-19 
21:37:29.304063 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:37:29.305204 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:37:29.306104 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:37:29.306811 | orchestrator | 2025-05-19 21:37:29.307859 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-19 21:37:29.308506 | orchestrator | Monday 19 May 2025 21:37:29 +0000 (0:00:00.986) 0:00:10.659 ************ 2025-05-19 21:37:29.380209 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:37:29.860589 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:37:29.861110 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:37:29.861875 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:37:29.862728 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:37:29.865128 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:37:29.866486 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:37:29.867379 | orchestrator | 2025-05-19 21:37:29.868022 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-19 21:37:29.868577 | orchestrator | Monday 19 May 2025 21:37:29 +0000 (0:00:00.564) 0:00:11.224 ************ 2025-05-19 21:37:29.961455 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:37:29.977728 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:37:30.005110 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:37:30.267339 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:37:30.268370 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:37:30.269388 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:37:30.270389 | orchestrator | ok: [testbed-manager] 2025-05-19 21:37:30.271288 | orchestrator | 2025-05-19 21:37:30.271860 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-19 21:37:30.272818 | orchestrator | Monday 19 May 2025 
21:37:30 +0000 (0:00:00.403) 0:00:11.628 ************ 2025-05-19 21:37:30.333496 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:37:30.359067 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:37:30.378272 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:37:30.404739 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:37:30.454646 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:37:30.455649 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:37:30.456012 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:37:30.456478 | orchestrator | 2025-05-19 21:37:30.457403 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-19 21:37:30.457849 | orchestrator | Monday 19 May 2025 21:37:30 +0000 (0:00:00.190) 0:00:11.819 ************ 2025-05-19 21:37:30.732796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:37:30.733076 | orchestrator | 2025-05-19 21:37:30.735071 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-19 21:37:30.735101 | orchestrator | Monday 19 May 2025 21:37:30 +0000 (0:00:00.277) 0:00:12.096 ************ 2025-05-19 21:37:31.015286 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:37:31.015506 | orchestrator | 2025-05-19 21:37:31.016296 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-19 21:37:31.017509 | orchestrator | Monday 19 May 2025 21:37:31 +0000 (0:00:00.282) 0:00:12.378 ************ 2025-05-19 
21:37:32.375587 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:32.375674 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:32.376252 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:32.376528 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:32.376913 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:32.377631 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:32.378092 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:32.378739 | orchestrator |
2025-05-19 21:37:32.379422 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-19 21:37:32.380672 | orchestrator | Monday 19 May 2025 21:37:32 +0000 (0:00:01.359) 0:00:13.738 ************
2025-05-19 21:37:32.451238 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:37:32.473255 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:37:32.500861 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:37:32.524121 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:37:32.583703 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:37:32.583777 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:37:32.584347 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:37:32.589922 | orchestrator |
2025-05-19 21:37:32.589961 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-19 21:37:32.589975 | orchestrator | Monday 19 May 2025 21:37:32 +0000 (0:00:00.209) 0:00:13.947 ************
2025-05-19 21:37:33.131189 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:33.131655 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:33.134151 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:33.134179 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:33.134941 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:33.135221 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:33.135762 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:33.136305 | orchestrator |
2025-05-19 21:37:33.136792 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-19 21:37:33.137138 | orchestrator | Monday 19 May 2025 21:37:33 +0000 (0:00:00.546) 0:00:14.494 ************
2025-05-19 21:37:33.204572 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:37:33.236628 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:37:33.253095 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:37:33.281001 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:37:33.344638 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:37:33.345643 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:37:33.347189 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:37:33.349127 | orchestrator |
2025-05-19 21:37:33.350163 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-19 21:37:33.350420 | orchestrator | Monday 19 May 2025 21:37:33 +0000 (0:00:00.213) 0:00:14.708 ************
2025-05-19 21:37:33.881767 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:33.883124 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:37:33.883949 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:37:33.884770 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:37:33.886319 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:37:33.887759 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:37:33.888009 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:37:33.889011 | orchestrator |
2025-05-19 21:37:33.889929 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-19 21:37:33.891241 | orchestrator | Monday 19 May 2025 21:37:33 +0000 (0:00:00.534) 0:00:15.242 ************
2025-05-19 21:37:34.982535 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:34.982668 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:37:34.982753 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:37:34.982771 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:37:34.984359 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:37:34.984452 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:37:34.984467 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:37:34.984479 | orchestrator |
2025-05-19 21:37:34.984556 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-19 21:37:34.984918 | orchestrator | Monday 19 May 2025 21:37:34 +0000 (0:00:01.102) 0:00:16.345 ************
2025-05-19 21:37:36.194755 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:36.194952 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:36.195741 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:36.195927 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:36.196721 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:36.197229 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:36.197507 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:36.197678 | orchestrator |
2025-05-19 21:37:36.198106 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-19 21:37:36.198392 | orchestrator | Monday 19 May 2025 21:37:36 +0000 (0:00:01.212) 0:00:17.557 ************
2025-05-19 21:37:36.557210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:37:36.557396 | orchestrator |
2025-05-19 21:37:36.559604 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-19 21:37:36.560276 | orchestrator | Monday 19 May 2025 21:37:36 +0000 (0:00:00.359) 0:00:17.916 ************
2025-05-19 21:37:36.642600 | orchestrator | skipping: [testbed-manager]
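The resolvconf tasks above replace /etc/resolv.conf with a symlink to systemd-resolved's stub resolver and make sure the service is running. A minimal sketch of equivalent hand-written tasks (an illustration only, not the actual osism.commons.resolvconf source, which also handles archiving and distribution-specific configuration):

```yaml
# Sketch only: task names mirror the log above; the real role differs in detail.
- name: Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf
  ansible.builtin.file:
    src: /run/systemd/resolve/stub-resolv.conf
    dest: /etc/resolv.conf
    state: link
    force: true

- name: Start/enable systemd-resolved service
  ansible.builtin.service:
    name: systemd-resolved
    state: started
    enabled: true
```

On testbed-manager the link already exists, which is why the log reports "ok" there and "changed" on the freshly provisioned nodes.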
2025-05-19 21:37:37.792754 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:37:37.793388 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:37:37.794148 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:37:37.795137 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:37:37.795827 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:37:37.796346 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:37:37.796986 | orchestrator |
2025-05-19 21:37:37.798481 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-19 21:37:37.799637 | orchestrator | Monday 19 May 2025 21:37:37 +0000 (0:00:01.237) 0:00:19.154 ************
2025-05-19 21:37:37.882507 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:37.918200 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:37.948611 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:37.979639 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:38.042315 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:38.043063 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:38.044278 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:38.044827 | orchestrator |
2025-05-19 21:37:38.045407 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-19 21:37:38.046169 | orchestrator | Monday 19 May 2025 21:37:38 +0000 (0:00:00.251) 0:00:19.405 ************
2025-05-19 21:37:38.130174 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:38.161724 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:38.192455 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:38.223687 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:38.298487 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:38.298662 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:38.299123 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:38.299847 | orchestrator |
2025-05-19 21:37:38.302914 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-19 21:37:38.303970 | orchestrator | Monday 19 May 2025 21:37:38 +0000 (0:00:00.255) 0:00:19.661 ************
2025-05-19 21:37:38.387065 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:38.420152 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:38.451317 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:38.492065 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:38.565231 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:38.566327 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:38.566805 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:38.567601 | orchestrator |
2025-05-19 21:37:38.568498 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-19 21:37:38.569233 | orchestrator | Monday 19 May 2025 21:37:38 +0000 (0:00:00.267) 0:00:19.928 ************
2025-05-19 21:37:38.843342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:37:38.843499 | orchestrator |
2025-05-19 21:37:38.844218 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-19 21:37:38.845245 | orchestrator | Monday 19 May 2025 21:37:38 +0000 (0:00:00.277) 0:00:20.206 ************
2025-05-19 21:37:39.387467 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:39.388819 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:39.388899 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:39.388914 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:39.389183 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:39.389207 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:39.389917 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:39.390468 | orchestrator |
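The recurring "Gather variables for each operating system" / "Include distribution specific ... tasks" pairs (here resolving to Ubuntu.yml) follow a common Ansible pattern of dispatching on distribution facts. A hedged sketch of how such a dispatch is typically written; the vars file names and paths below are illustrative assumptions, not the osism role source:

```yaml
# Sketch of the fact-based dispatch pattern seen in the log above.
- name: Gather variables for each operating system
  ansible.builtin.include_vars: "{{ lookup('ansible.builtin.first_found', params) }}"
  vars:
    params:
      files:
        - "{{ ansible_facts['distribution'] }}.yml"        # e.g. Ubuntu.yml
        - "{{ ansible_facts['os_family'] }}-family.yml"    # e.g. Debian-family.yml
      paths:
        - vars  # assumed location

- name: Include distribution specific repository tasks
  ansible.builtin.include_tasks: "{{ ansible_facts['distribution'] }}.yml"
```

This is why the same role transparently runs Debian-family task files on the Debian 12 executor host and Ubuntu task files on the Ubuntu 24.04 testbed nodes.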
2025-05-19 21:37:39.390944 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-19 21:37:39.393197 | orchestrator | Monday 19 May 2025 21:37:39 +0000 (0:00:00.534) 0:00:20.741 ************
2025-05-19 21:37:39.471300 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:37:39.509030 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:37:39.537168 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:37:39.561344 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:37:39.643684 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:37:39.643789 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:37:39.645150 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:37:39.645824 | orchestrator |
2025-05-19 21:37:39.646679 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-19 21:37:39.647190 | orchestrator | Monday 19 May 2025 21:37:39 +0000 (0:00:00.263) 0:00:21.004 ************
2025-05-19 21:37:40.736075 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:40.737081 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:40.738084 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:40.738696 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:37:40.739658 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:40.740653 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:37:40.741220 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:37:40.742280 | orchestrator |
2025-05-19 21:37:40.742614 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-19 21:37:40.743357 | orchestrator | Monday 19 May 2025 21:37:40 +0000 (0:00:01.072) 0:00:22.077 ************
2025-05-19 21:37:41.328487 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:41.328646 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:41.329612 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:41.330557 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:41.331164 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:41.331927 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:41.332291 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:41.332834 | orchestrator |
2025-05-19 21:37:41.333423 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-19 21:37:41.333966 | orchestrator | Monday 19 May 2025 21:37:41 +0000 (0:00:00.605) 0:00:22.683 ************
2025-05-19 21:37:42.456688 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:42.456795 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:42.456870 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:42.457629 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:42.458388 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:37:42.459904 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:37:42.460626 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:37:42.461893 | orchestrator |
2025-05-19 21:37:42.463899 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-19 21:37:42.465075 | orchestrator | Monday 19 May 2025 21:37:42 +0000 (0:00:01.134) 0:00:23.817 ************
2025-05-19 21:37:56.397555 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:56.397711 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:56.397728 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:56.397740 | orchestrator | changed: [testbed-manager]
2025-05-19 21:37:56.397753 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:37:56.397764 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:37:56.397776 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:37:56.397853 | orchestrator |
2025-05-19 21:37:56.398476 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-05-19 21:37:56.398585 | orchestrator | Monday 19 May 2025 21:37:56 +0000 (0:00:13.937) 0:00:37.755 ************
2025-05-19 21:37:56.464476 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:56.492355 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:56.515646 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:56.543232 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:56.597377 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:56.598737 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:56.599625 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:56.600413 | orchestrator |
2025-05-19 21:37:56.601467 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-05-19 21:37:56.602166 | orchestrator | Monday 19 May 2025 21:37:56 +0000 (0:00:00.205) 0:00:37.961 ************
2025-05-19 21:37:56.665781 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:56.693291 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:56.716013 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:56.744046 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:56.795787 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:56.796801 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:56.797557 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:56.798142 | orchestrator |
2025-05-19 21:37:56.799049 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-05-19 21:37:56.799826 | orchestrator | Monday 19 May 2025 21:37:56 +0000 (0:00:00.198) 0:00:38.159 ************
2025-05-19 21:37:56.873722 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:56.908678 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:56.936472 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:56.958577 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:57.023133 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:57.023588 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:57.024364 | orchestrator | ok: [testbed-node-2]
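The "Remove sources.list file" / "Copy ubuntu.sources file" pair above reflects Ubuntu 24.04's move from the classic /etc/apt/sources.list to a deb822-style /etc/apt/sources.list.d/ubuntu.sources. For reference, a stock deb822 entry looks like the following; the file the role actually deploys may point at different mirrors or components:

```text
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```

Removing the old sources.list first avoids APT reading the same suites from two places after the deb822 file lands.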
2025-05-19 21:37:57.025231 | orchestrator |
2025-05-19 21:37:57.025902 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-05-19 21:37:57.026351 | orchestrator | Monday 19 May 2025 21:37:57 +0000 (0:00:00.227) 0:00:38.387 ************
2025-05-19 21:37:57.284547 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:37:57.285178 | orchestrator |
2025-05-19 21:37:57.285608 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-05-19 21:37:57.286457 | orchestrator | Monday 19 May 2025 21:37:57 +0000 (0:00:00.260) 0:00:38.647 ************
2025-05-19 21:37:58.897552 | orchestrator | ok: [testbed-manager]
2025-05-19 21:37:58.897909 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:37:58.898710 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:37:58.900389 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:37:58.900576 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:37:58.901252 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:37:58.901562 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:37:58.902103 | orchestrator |
2025-05-19 21:37:58.902420 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-05-19 21:37:58.902850 | orchestrator | Monday 19 May 2025 21:37:58 +0000 (0:00:01.611) 0:00:40.259 ************
2025-05-19 21:37:59.997071 | orchestrator | changed: [testbed-manager]
2025-05-19 21:37:59.997566 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:37:59.998482 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:37:59.999034 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:37:59.999541 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:38:00.000030 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:38:00.000561 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:38:00.001051 | orchestrator |
2025-05-19 21:38:00.001731 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-05-19 21:38:00.002092 | orchestrator | Monday 19 May 2025 21:37:59 +0000 (0:00:01.095) 0:00:41.355 ************
2025-05-19 21:38:00.859409 | orchestrator | ok: [testbed-manager]
2025-05-19 21:38:00.860333 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:38:00.861436 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:38:00.862834 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:38:00.863853 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:38:00.864233 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:38:00.865145 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:38:00.865979 | orchestrator |
2025-05-19 21:38:00.866780 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-05-19 21:38:00.867721 | orchestrator | Monday 19 May 2025 21:38:00 +0000 (0:00:00.866) 0:00:42.222 ************
2025-05-19 21:38:01.164816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:38:01.167615 | orchestrator |
2025-05-19 21:38:01.168232 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-05-19 21:38:01.169028 | orchestrator | Monday 19 May 2025 21:38:01 +0000 (0:00:00.303) 0:00:42.525 ************
2025-05-19 21:38:02.197774 | orchestrator | changed: [testbed-manager]
2025-05-19 21:38:02.198235 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:38:02.199921 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:38:02.201051 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:38:02.201913 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:38:02.202513 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:38:02.203284 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:38:02.204048 | orchestrator |
2025-05-19 21:38:02.204776 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-05-19 21:38:02.205533 | orchestrator | Monday 19 May 2025 21:38:02 +0000 (0:00:01.033) 0:00:43.559 ************
2025-05-19 21:38:02.303170 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:38:02.335217 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:38:02.359051 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:38:02.499555 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:38:02.500494 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:38:02.501577 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:38:02.504853 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:38:02.505019 | orchestrator |
2025-05-19 21:38:02.505064 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-05-19 21:38:02.505608 | orchestrator | Monday 19 May 2025 21:38:02 +0000 (0:00:00.303) 0:00:43.863 ************
2025-05-19 21:38:13.682776 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:38:13.682899 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:38:13.682915 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:38:13.682926 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:38:13.684992 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:38:13.685809 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:38:13.687424 | orchestrator | changed: [testbed-manager]
2025-05-19 21:38:13.688187 | orchestrator |
2025-05-19 21:38:13.689055 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-05-19 21:38:13.689990 | orchestrator | Monday 19 May 2025 21:38:13 +0000 (0:00:11.177) 0:00:55.040 ************
2025-05-19 21:38:14.706448 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:38:14.710108 | orchestrator | ok: [testbed-manager]
2025-05-19 21:38:14.711563 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:38:14.712922 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:38:14.713892 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:38:14.716295 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:38:14.717281 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:38:14.718413 | orchestrator |
2025-05-19 21:38:14.719334 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-05-19 21:38:14.720839 | orchestrator | Monday 19 May 2025 21:38:14 +0000 (0:00:01.027) 0:00:56.068 ************
2025-05-19 21:38:15.575215 | orchestrator | ok: [testbed-manager]
2025-05-19 21:38:15.575675 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:38:15.578824 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:38:15.580006 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:38:15.582631 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:38:15.583709 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:38:15.585035 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:38:15.586206 | orchestrator |
2025-05-19 21:38:15.589017 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-05-19 21:38:15.590000 | orchestrator | Monday 19 May 2025 21:38:15 +0000 (0:00:00.869) 0:00:56.937 ************
2025-05-19 21:38:15.650901 | orchestrator | ok: [testbed-manager]
2025-05-19 21:38:15.677482 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:38:15.704756 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:38:15.731342 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:38:15.800978 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:38:15.801154 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:38:15.801408 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:38:15.801763 | orchestrator |
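The osism.commons.systohc and osism.commons.configfs steps above boil down to writing the current system time to the hardware clock (hwclock lives in util-linux-extra on Ubuntu 24.04, hence the preceding package install) and making sure the kernel's configfs is mounted. A sketch of comparable tasks, illustrative only and not the actual role source:

```yaml
# Sketch only: mirrors the task names in the log above.
- name: Sync hardware clock
  ansible.builtin.command: hwclock --systohc
  changed_when: false  # clock sync is not a configuration change worth reporting

- name: Start sys-kernel-config mount
  ansible.builtin.systemd:
    name: sys-kernel-config.mount
    state: started
```

The mount task reports "ok" everywhere because systemd mounts configfs at /sys/kernel/config by default on these images.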
2025-05-19 21:38:15.802373 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-05-19 21:38:15.802962 | orchestrator | Monday 19 May 2025 21:38:15 +0000 (0:00:00.227) 0:00:57.164 ************
2025-05-19 21:38:15.878835 | orchestrator | ok: [testbed-manager]
2025-05-19 21:38:15.910537 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:38:15.929986 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:38:15.956010 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:38:16.013003 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:38:16.013975 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:38:16.014679 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:38:16.015355 | orchestrator |
2025-05-19 21:38:16.016869 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-05-19 21:38:16.016892 | orchestrator | Monday 19 May 2025 21:38:16 +0000 (0:00:00.212) 0:00:57.377 ************
2025-05-19 21:38:16.313078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:38:16.313545 | orchestrator |
2025-05-19 21:38:16.314573 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-05-19 21:38:16.315410 | orchestrator | Monday 19 May 2025 21:38:16 +0000 (0:00:00.298) 0:00:57.675 ************
2025-05-19 21:38:17.852362 | orchestrator | ok: [testbed-manager]
2025-05-19 21:38:17.854322 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:38:17.854723 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:38:17.855449 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:38:17.859135 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:38:17.860251 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:38:17.861306 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:38:17.862105 | orchestrator |
2025-05-19 21:38:17.863107 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-05-19 21:38:17.863875 | orchestrator | Monday 19 May 2025 21:38:17 +0000 (0:00:01.537) 0:00:59.213 ************
2025-05-19 21:38:18.386257 | orchestrator | changed: [testbed-manager]
2025-05-19 21:38:18.386628 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:38:18.387740 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:38:18.388357 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:38:18.389167 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:38:18.390953 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:38:18.391864 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:38:18.392739 | orchestrator |
2025-05-19 21:38:18.393067 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-05-19 21:38:18.393514 | orchestrator | Monday 19 May 2025 21:38:18 +0000 (0:00:00.535) 0:00:59.749 ************
2025-05-19 21:38:18.468789 | orchestrator | ok: [testbed-manager]
2025-05-19 21:38:18.499734 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:38:18.540359 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:38:18.563852 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:38:18.638782 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:38:18.641586 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:38:18.641904 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:38:18.642644 | orchestrator |
2025-05-19 21:38:18.643081 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-05-19 21:38:18.643808 | orchestrator | Monday 19 May 2025 21:38:18 +0000 (0:00:00.252) 0:01:00.002 ************
2025-05-19 21:38:19.757030 | orchestrator | ok: [testbed-manager]
2025-05-19 21:38:19.758366 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:38:19.758415 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:38:19.759533 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:38:19.763125 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:38:19.763387 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:38:19.764283 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:38:19.764775 | orchestrator |
2025-05-19 21:38:19.765614 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-05-19 21:38:19.766391 | orchestrator | Monday 19 May 2025 21:38:19 +0000 (0:00:01.117) 0:01:01.119 ************
2025-05-19 21:38:21.346290 | orchestrator | changed: [testbed-manager]
2025-05-19 21:38:21.346381 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:38:21.346431 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:38:21.347646 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:38:21.347819 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:38:21.348733 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:38:21.349575 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:38:21.350257 | orchestrator |
2025-05-19 21:38:21.351521 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-05-19 21:38:21.352327 | orchestrator | Monday 19 May 2025 21:38:21 +0000 (0:00:01.586) 0:01:02.705 ************
2025-05-19 21:38:23.840236 | orchestrator | ok: [testbed-manager]
2025-05-19 21:38:23.840363 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:38:23.840379 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:38:23.840701 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:38:23.841421 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:38:23.841744 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:38:23.844418 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:38:23.844778 | orchestrator |
2025-05-19 21:38:23.846386 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-05-19 21:38:23.846630 | orchestrator | Monday 19 May 2025 21:38:23 +0000 (0:00:02.493) 0:01:05.199 ************
2025-05-19 21:39:00.422330 | orchestrator | ok: [testbed-manager]
2025-05-19 21:39:00.423237 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:39:00.424247 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:39:00.424773 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:39:00.427067 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:39:00.427642 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:39:00.428629 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:39:00.429364 | orchestrator |
2025-05-19 21:39:00.430067 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-05-19 21:39:00.430643 | orchestrator | Monday 19 May 2025 21:39:00 +0000 (0:00:36.581) 0:01:41.781 ************
2025-05-19 21:40:16.503372 | orchestrator | changed: [testbed-manager]
2025-05-19 21:40:16.503576 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:40:16.503593 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:40:16.503604 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:40:16.503684 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:40:16.504612 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:40:16.505143 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:40:16.505903 | orchestrator |
2025-05-19 21:40:16.506843 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-05-19 21:40:16.507208 | orchestrator | Monday 19 May 2025 21:40:16 +0000 (0:01:16.080) 0:02:57.862 ************
2025-05-19 21:40:18.169194 | orchestrator | ok: [testbed-manager]
2025-05-19 21:40:18.169838 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:40:18.171911 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:40:18.172925 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:40:18.173915 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:40:18.174762 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:40:18.175134 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:40:18.175855 | orchestrator |
2025-05-19 21:40:18.176482 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-05-19 21:40:18.177139 | orchestrator | Monday 19 May 2025 21:40:18 +0000 (0:00:01.669) 0:02:59.531 ************
2025-05-19 21:40:29.247763 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:40:29.247890 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:40:29.247907 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:40:29.248792 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:40:29.250494 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:40:29.252115 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:40:29.252845 | orchestrator | changed: [testbed-manager]
2025-05-19 21:40:29.253674 | orchestrator |
2025-05-19 21:40:29.254534 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-05-19 21:40:29.256911 | orchestrator | Monday 19 May 2025 21:40:29 +0000 (0:00:00.360) 0:03:10.606 ************
2025-05-19 21:40:29.605461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-19 21:40:29.605587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-19 21:40:29.605607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-19 21:40:29.606172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-19 21:40:29.607686 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-05-19 21:40:29.608475 | orchestrator |
2025-05-19 21:40:29.609680 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-05-19 21:40:29.611140 | orchestrator | Monday 19 May 2025 21:40:29 +0000 (0:00:00.360) 0:03:10.967 ************
2025-05-19 21:40:29.672791 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-19 21:40:29.696458 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:40:29.696733 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-19 21:40:29.729955 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:40:29.732785 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-19 21:40:29.733102 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-19 21:40:29.749418 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:40:29.773126 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:40:30.350711 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-19 21:40:30.354873 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-19 21:40:30.356191 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-19 21:40:30.357830 | orchestrator |
2025-05-19 21:40:30.359558 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-05-19 21:40:30.360549 | orchestrator | Monday 19 May 2025 21:40:30 +0000 (0:00:00.746) 0:03:11.713 ************
2025-05-19 21:40:30.406502 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-19 21:40:30.407006 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-19 21:40:30.407773 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-19 21:40:30.408558 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-19 21:40:30.409378 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-19 21:40:30.411662 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-19 21:40:30.441591 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-19 21:40:30.442530 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-19 21:40:30.443290 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-19 21:40:30.444131 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-19 21:40:30.445197 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-19 21:40:30.445903 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-19 21:40:30.447987 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-19 21:40:30.448549 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-19 21:40:30.449179 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-19 21:40:30.449643 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-19 21:40:30.450378 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-19 21:40:30.452883 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-19 21:40:30.453453 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-19 21:40:30.477382 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:40:30.478239 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-19 21:40:30.481125 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-19 21:40:30.482128 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-19 21:40:30.482894 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-19 21:40:30.483943 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-19 21:40:30.484460 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-19 21:40:30.485203 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-19 21:40:30.486976 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-19 21:40:30.518309 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-19 21:40:30.518897 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:40:30.519703 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-19 21:40:30.520662 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-19 21:40:30.521284 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-19 21:40:30.524286 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-19 21:40:30.525117 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-19 21:40:30.525848 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-19 21:40:30.526559 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-19
21:40:30.527927 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-19 21:40:30.528488 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-19 21:40:30.548490 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-19 21:40:30.549159 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:40:30.549841 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-19 21:40:30.553090 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-19 21:40:34.946171 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:40:34.946323 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-19 21:40:34.946417 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-19 21:40:34.946434 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-19 21:40:34.948386 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-19 21:40:34.949823 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-19 21:40:34.950645 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-19 21:40:34.951462 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-19 21:40:34.952681 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-19 21:40:34.953181 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-19 
21:40:34.954181 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-19 21:40:34.954841 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-19 21:40:34.955766 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-19 21:40:34.956469 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-19 21:40:34.957210 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-19 21:40:34.957973 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-19 21:40:34.958775 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-19 21:40:34.959327 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-19 21:40:34.959830 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-19 21:40:34.960393 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-19 21:40:34.960896 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-19 21:40:34.961401 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-19 21:40:34.961963 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-19 21:40:34.962439 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-19 21:40:34.963003 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-19 21:40:34.963394 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-19 21:40:34.963996 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-19 21:40:34.964677 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-19 21:40:34.964985 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-19 21:40:34.965505 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-19 21:40:34.965882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-19 21:40:34.966421 | orchestrator | 2025-05-19 21:40:34.966896 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-19 21:40:34.967204 | orchestrator | Monday 19 May 2025 21:40:34 +0000 (0:00:04.595) 0:03:16.308 ************ 2025-05-19 21:40:36.504098 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 21:40:36.535761 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 21:40:36.535817 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 21:40:36.535830 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 21:40:36.535841 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 21:40:36.535852 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 21:40:36.535863 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-19 21:40:36.535874 | orchestrator | 2025-05-19 21:40:36.535886 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-19 
21:40:36.535898 | orchestrator | Monday 19 May 2025 21:40:36 +0000 (0:00:01.557) 0:03:17.866 ************ 2025-05-19 21:40:36.562509 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 21:40:36.589577 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:40:36.648938 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 21:40:36.681328 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:40:37.019133 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 21:40:37.019269 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:40:37.019299 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-19 21:40:37.020858 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:40:37.021410 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-19 21:40:37.022442 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-19 21:40:37.023541 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-19 21:40:37.024249 | orchestrator | 2025-05-19 21:40:37.025101 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-19 21:40:37.026105 | orchestrator | Monday 19 May 2025 21:40:37 +0000 (0:00:00.512) 0:03:18.378 ************ 2025-05-19 21:40:37.071395 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 21:40:37.094161 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:40:37.170866 | orchestrator | skipping: [testbed-node-0] => (item={'name': 
'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 21:40:37.566274 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 21:40:37.566369 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:40:37.566970 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:40:37.569130 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-19 21:40:37.570322 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:40:37.571424 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-19 21:40:37.572241 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-19 21:40:37.573347 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-19 21:40:37.574524 | orchestrator | 2025-05-19 21:40:37.575595 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-19 21:40:37.577567 | orchestrator | Monday 19 May 2025 21:40:37 +0000 (0:00:00.550) 0:03:18.929 ************ 2025-05-19 21:40:37.647138 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:40:37.671351 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:40:37.699552 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:40:37.719966 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:40:37.840464 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:40:37.840637 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:40:37.845265 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:40:37.845306 | orchestrator | 2025-05-19 21:40:37.845324 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-19 21:40:37.845337 | orchestrator | Monday 19 May 2025 21:40:37 +0000 (0:00:00.274) 
0:03:19.204 ************ 2025-05-19 21:40:43.621044 | orchestrator | ok: [testbed-manager] 2025-05-19 21:40:43.621973 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:40:43.622941 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:40:43.623190 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:40:43.624843 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:40:43.626123 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:40:43.626955 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:40:43.627987 | orchestrator | 2025-05-19 21:40:43.628762 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-19 21:40:43.629497 | orchestrator | Monday 19 May 2025 21:40:43 +0000 (0:00:05.779) 0:03:24.984 ************ 2025-05-19 21:40:43.697646 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-19 21:40:43.734414 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-19 21:40:43.734963 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:40:43.736035 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-19 21:40:43.769876 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:40:43.802359 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-19 21:40:43.803091 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:40:43.837629 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-19 21:40:43.838126 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:40:43.839543 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-19 21:40:43.896258 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:40:43.897064 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:40:43.898932 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-05-19 21:40:43.899852 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:40:43.899990 | orchestrator | 2025-05-19 21:40:43.900969 | orchestrator | TASK [osism.commons.services : 
Start/enable required services] ***************** 2025-05-19 21:40:43.902650 | orchestrator | Monday 19 May 2025 21:40:43 +0000 (0:00:00.277) 0:03:25.261 ************ 2025-05-19 21:40:44.913567 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-19 21:40:44.914477 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-19 21:40:44.915882 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-19 21:40:44.917427 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-19 21:40:44.918620 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-19 21:40:44.919498 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-19 21:40:44.920456 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-19 21:40:44.921618 | orchestrator | 2025-05-19 21:40:44.922825 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-19 21:40:44.923240 | orchestrator | Monday 19 May 2025 21:40:44 +0000 (0:00:01.015) 0:03:26.276 ************ 2025-05-19 21:40:45.368053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:40:45.368720 | orchestrator | 2025-05-19 21:40:45.371305 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-19 21:40:45.371340 | orchestrator | Monday 19 May 2025 21:40:45 +0000 (0:00:00.453) 0:03:26.730 ************ 2025-05-19 21:40:46.616222 | orchestrator | ok: [testbed-manager] 2025-05-19 21:40:46.616329 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:40:46.616342 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:40:46.616352 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:40:46.616526 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:40:46.617067 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:40:46.617725 | 
orchestrator | ok: [testbed-node-2] 2025-05-19 21:40:46.618252 | orchestrator | 2025-05-19 21:40:46.618942 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-19 21:40:46.619331 | orchestrator | Monday 19 May 2025 21:40:46 +0000 (0:00:01.244) 0:03:27.975 ************ 2025-05-19 21:40:47.212807 | orchestrator | ok: [testbed-manager] 2025-05-19 21:40:47.213452 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:40:47.213721 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:40:47.214930 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:40:47.215367 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:40:47.215885 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:40:47.216345 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:40:47.216682 | orchestrator | 2025-05-19 21:40:47.218506 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-19 21:40:47.218547 | orchestrator | Monday 19 May 2025 21:40:47 +0000 (0:00:00.600) 0:03:28.575 ************ 2025-05-19 21:40:47.826128 | orchestrator | changed: [testbed-manager] 2025-05-19 21:40:47.829762 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:40:47.830913 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:40:47.832101 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:40:47.832910 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:40:47.834592 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:40:47.834955 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:40:47.835759 | orchestrator | 2025-05-19 21:40:47.836458 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-19 21:40:47.837129 | orchestrator | Monday 19 May 2025 21:40:47 +0000 (0:00:00.613) 0:03:29.189 ************ 2025-05-19 21:40:48.396889 | orchestrator | ok: [testbed-manager] 2025-05-19 21:40:48.398082 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:40:48.398725 
| orchestrator | ok: [testbed-node-3] 2025-05-19 21:40:48.399556 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:40:48.400630 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:40:48.402228 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:40:48.403399 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:40:48.404529 | orchestrator | 2025-05-19 21:40:48.405366 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-19 21:40:48.406578 | orchestrator | Monday 19 May 2025 21:40:48 +0000 (0:00:00.571) 0:03:29.760 ************ 2025-05-19 21:40:49.318439 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747689189.753145, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.319171 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747689231.3723552, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.322641 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747689218.2096572, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.323421 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747689229.309374, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.324279 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747689224.5198355, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.325100 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747689215.703427, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.325239 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1747689227.5172846, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.325584 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747689210.2222898, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.326261 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747689148.7972517, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.326789 | orchestrator | changed: [testbed-node-3] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747689138.3696756, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.327054 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747689138.0344088, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.327746 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747689142.7924414, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.328111 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1747689152.4804215, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.328521 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1747689148.911639, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 21:40:49.328787 | orchestrator | 2025-05-19 21:40:49.329205 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-19 21:40:49.329649 | orchestrator | Monday 19 May 2025 21:40:49 +0000 (0:00:00.920) 0:03:30.680 ************ 2025-05-19 21:40:50.378976 | orchestrator | changed: [testbed-manager] 2025-05-19 21:40:50.379087 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:40:50.379938 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:40:50.380671 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:40:50.381775 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:40:50.382478 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:40:50.383251 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:40:50.384078 | orchestrator | 2025-05-19 21:40:50.384794 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-19 21:40:50.385330 | orchestrator | Monday 19 May 2025 21:40:50 +0000 (0:00:01.061) 0:03:31.742 ************ 2025-05-19 21:40:51.520299 | orchestrator | 
changed: [testbed-manager] 2025-05-19 21:40:51.520410 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:40:51.521106 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:40:51.521486 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:40:51.522314 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:40:51.522873 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:40:51.523683 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:40:51.524189 | orchestrator | 2025-05-19 21:40:51.525074 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-05-19 21:40:51.525583 | orchestrator | Monday 19 May 2025 21:40:51 +0000 (0:00:01.140) 0:03:32.882 ************ 2025-05-19 21:40:52.658327 | orchestrator | changed: [testbed-manager] 2025-05-19 21:40:52.658437 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:40:52.660902 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:40:52.660927 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:40:52.660938 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:40:52.660949 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:40:52.663040 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:40:52.663067 | orchestrator | 2025-05-19 21:40:52.664049 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-05-19 21:40:52.664433 | orchestrator | Monday 19 May 2025 21:40:52 +0000 (0:00:01.137) 0:03:34.020 ************ 2025-05-19 21:40:52.723373 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:40:52.753301 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:40:52.783023 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:40:52.812545 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:40:52.841238 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:40:52.890925 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:40:52.891117 | orchestrator | skipping: 
[testbed-node-2] 2025-05-19 21:40:52.891793 | orchestrator | 2025-05-19 21:40:52.892577 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-19 21:40:52.892817 | orchestrator | Monday 19 May 2025 21:40:52 +0000 (0:00:00.235) 0:03:34.256 ************ 2025-05-19 21:40:53.641564 | orchestrator | ok: [testbed-manager] 2025-05-19 21:40:53.642422 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:40:53.643164 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:40:53.644246 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:40:53.644798 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:40:53.646000 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:40:53.646734 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:40:53.647368 | orchestrator | 2025-05-19 21:40:53.648224 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-19 21:40:53.649075 | orchestrator | Monday 19 May 2025 21:40:53 +0000 (0:00:00.745) 0:03:35.001 ************ 2025-05-19 21:40:54.045663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:40:54.047499 | orchestrator | 2025-05-19 21:40:54.048881 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-19 21:40:54.050002 | orchestrator | Monday 19 May 2025 21:40:54 +0000 (0:00:00.405) 0:03:35.407 ************ 2025-05-19 21:41:01.963448 | orchestrator | ok: [testbed-manager] 2025-05-19 21:41:01.964708 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:41:01.966435 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:41:01.968371 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:41:01.969332 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:41:01.970916 | orchestrator | 
changed: [testbed-node-2] 2025-05-19 21:41:01.971627 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:41:01.972891 | orchestrator | 2025-05-19 21:41:01.973655 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-05-19 21:41:01.974702 | orchestrator | Monday 19 May 2025 21:41:01 +0000 (0:00:07.918) 0:03:43.325 ************ 2025-05-19 21:41:03.159453 | orchestrator | ok: [testbed-manager] 2025-05-19 21:41:03.161048 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:41:03.161135 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:41:03.163618 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:41:03.164453 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:41:03.165215 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:41:03.165538 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:41:03.167045 | orchestrator | 2025-05-19 21:41:03.167795 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-05-19 21:41:03.168093 | orchestrator | Monday 19 May 2025 21:41:03 +0000 (0:00:01.191) 0:03:44.517 ************ 2025-05-19 21:41:04.301035 | orchestrator | ok: [testbed-manager] 2025-05-19 21:41:04.301147 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:41:04.301164 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:41:04.301176 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:41:04.301616 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:41:04.302802 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:41:04.305096 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:41:04.305124 | orchestrator | 2025-05-19 21:41:04.305381 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-05-19 21:41:04.306203 | orchestrator | Monday 19 May 2025 21:41:04 +0000 (0:00:01.142) 0:03:45.659 ************ 2025-05-19 21:41:04.945473 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:41:04.947628 | orchestrator | 2025-05-19 21:41:04.948557 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-05-19 21:41:04.951734 | orchestrator | Monday 19 May 2025 21:41:04 +0000 (0:00:00.649) 0:03:46.308 ************ 2025-05-19 21:41:13.284263 | orchestrator | changed: [testbed-manager] 2025-05-19 21:41:13.284387 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:41:13.284841 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:41:13.285972 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:41:13.290115 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:41:13.290598 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:41:13.291065 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:41:13.291732 | orchestrator | 2025-05-19 21:41:13.292361 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-05-19 21:41:13.293315 | orchestrator | Monday 19 May 2025 21:41:13 +0000 (0:00:08.336) 0:03:54.645 ************ 2025-05-19 21:41:13.885951 | orchestrator | changed: [testbed-manager] 2025-05-19 21:41:13.886939 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:41:13.887695 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:41:13.888793 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:41:13.889403 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:41:13.890812 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:41:13.892097 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:41:13.892532 | orchestrator | 2025-05-19 21:41:13.893704 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-05-19 21:41:13.894523 | orchestrator | 
Monday 19 May 2025 21:41:13 +0000 (0:00:00.604) 0:03:55.249 ************ 2025-05-19 21:41:15.026265 | orchestrator | changed: [testbed-manager] 2025-05-19 21:41:15.029448 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:41:15.030472 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:41:15.031337 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:41:15.033174 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:41:15.034117 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:41:15.035276 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:41:15.037079 | orchestrator | 2025-05-19 21:41:15.037104 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-05-19 21:41:15.037950 | orchestrator | Monday 19 May 2025 21:41:15 +0000 (0:00:01.138) 0:03:56.388 ************ 2025-05-19 21:41:16.055118 | orchestrator | changed: [testbed-manager] 2025-05-19 21:41:16.057790 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:41:16.057820 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:41:16.057832 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:41:16.058196 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:41:16.059000 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:41:16.059751 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:41:16.060382 | orchestrator | 2025-05-19 21:41:16.060953 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-05-19 21:41:16.061478 | orchestrator | Monday 19 May 2025 21:41:16 +0000 (0:00:01.027) 0:03:57.415 ************ 2025-05-19 21:41:16.159612 | orchestrator | ok: [testbed-manager] 2025-05-19 21:41:16.193159 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:41:16.245270 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:41:16.279267 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:41:16.346198 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:41:16.347477 | orchestrator | 
ok: [testbed-node-1] 2025-05-19 21:41:16.348299 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:41:16.349099 | orchestrator | 2025-05-19 21:41:16.350159 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-05-19 21:41:16.351315 | orchestrator | Monday 19 May 2025 21:41:16 +0000 (0:00:00.292) 0:03:57.708 ************ 2025-05-19 21:41:16.459923 | orchestrator | ok: [testbed-manager] 2025-05-19 21:41:16.492294 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:41:16.524852 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:41:16.557310 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:41:16.637057 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:41:16.637160 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:41:16.637179 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:41:16.637436 | orchestrator | 2025-05-19 21:41:16.638102 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-05-19 21:41:16.638593 | orchestrator | Monday 19 May 2025 21:41:16 +0000 (0:00:00.290) 0:03:57.999 ************ 2025-05-19 21:41:16.761136 | orchestrator | ok: [testbed-manager] 2025-05-19 21:41:16.800813 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:41:16.835796 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:41:16.873397 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:41:16.963525 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:41:16.965560 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:41:16.965591 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:41:16.968733 | orchestrator | 2025-05-19 21:41:16.969792 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-05-19 21:41:16.970253 | orchestrator | Monday 19 May 2025 21:41:16 +0000 (0:00:00.327) 0:03:58.327 ************ 2025-05-19 21:41:22.655300 | orchestrator | ok: [testbed-manager] 2025-05-19 21:41:22.655444 | orchestrator | ok: 
[testbed-node-5] 2025-05-19 21:41:22.655469 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:41:22.655617 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:41:22.656039 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:41:22.656283 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:41:22.656725 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:41:22.657250 | orchestrator | 2025-05-19 21:41:22.657281 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-05-19 21:41:22.657585 | orchestrator | Monday 19 May 2025 21:41:22 +0000 (0:00:05.691) 0:04:04.018 ************ 2025-05-19 21:41:23.084215 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:41:23.084560 | orchestrator | 2025-05-19 21:41:23.085337 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-05-19 21:41:23.087885 | orchestrator | Monday 19 May 2025 21:41:23 +0000 (0:00:00.428) 0:04:04.447 ************ 2025-05-19 21:41:23.155058 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-05-19 21:41:23.155136 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-05-19 21:41:23.199782 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:41:23.199850 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-05-19 21:41:23.201470 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-05-19 21:41:23.201496 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-05-19 21:41:23.237204 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-05-19 21:41:23.237938 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:41:23.284641 | orchestrator | skipping: [testbed-node-4] 2025-05-19 
21:41:23.285678 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-05-19 21:41:23.287455 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-05-19 21:41:23.291905 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-05-19 21:41:23.337119 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:41:23.337813 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-05-19 21:41:23.341519 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-05-19 21:41:23.341549 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-05-19 21:41:23.414218 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:41:23.415517 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:41:23.417488 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-05-19 21:41:23.418544 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-05-19 21:41:23.419993 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:41:23.420999 | orchestrator | 2025-05-19 21:41:23.422014 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-05-19 21:41:23.422936 | orchestrator | Monday 19 May 2025 21:41:23 +0000 (0:00:00.330) 0:04:04.778 ************ 2025-05-19 21:41:23.817116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:41:23.819442 | orchestrator | 2025-05-19 21:41:23.821965 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-05-19 21:41:23.822818 | orchestrator | Monday 19 May 2025 21:41:23 +0000 (0:00:00.401) 0:04:05.179 ************ 2025-05-19 21:41:23.909478 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  
2025-05-19 21:41:23.911091 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-05-19 21:41:23.952913 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:41:23.988974 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:41:23.989785 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-05-19 21:41:24.026478 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-05-19 21:41:24.027197 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:41:24.028311 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-05-19 21:41:24.063311 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:41:24.064050 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-05-19 21:41:24.155601 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:41:24.156726 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:41:24.157612 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-05-19 21:41:24.159851 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:41:24.160258 | orchestrator | 2025-05-19 21:41:24.161187 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-05-19 21:41:24.161891 | orchestrator | Monday 19 May 2025 21:41:24 +0000 (0:00:00.337) 0:04:05.516 ************ 2025-05-19 21:41:24.667622 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:41:24.667998 | orchestrator | 2025-05-19 21:41:24.668411 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-05-19 21:41:24.669388 | orchestrator | Monday 19 May 2025 21:41:24 +0000 (0:00:00.514) 0:04:06.031 ************ 2025-05-19 21:41:59.126859 | 
orchestrator | changed: [testbed-manager] 2025-05-19 21:41:59.126960 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:41:59.126998 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:41:59.127006 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:41:59.127012 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:41:59.127019 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:41:59.127072 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:41:59.128740 | orchestrator | 2025-05-19 21:41:59.129086 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-05-19 21:41:59.132894 | orchestrator | Monday 19 May 2025 21:41:59 +0000 (0:00:34.452) 0:04:40.483 ************ 2025-05-19 21:42:06.995545 | orchestrator | changed: [testbed-manager] 2025-05-19 21:42:06.999212 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:42:06.999260 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:42:06.999273 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:42:06.999285 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:42:06.999296 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:42:06.999307 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:42:06.999319 | orchestrator | 2025-05-19 21:42:06.999331 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-05-19 21:42:06.999396 | orchestrator | Monday 19 May 2025 21:42:06 +0000 (0:00:07.875) 0:04:48.359 ************ 2025-05-19 21:42:14.717590 | orchestrator | changed: [testbed-manager] 2025-05-19 21:42:14.717810 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:42:14.718559 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:42:14.720903 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:42:14.725250 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:42:14.725786 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:42:14.726074 | orchestrator | changed: 
[testbed-node-4] 2025-05-19 21:42:14.726688 | orchestrator | 2025-05-19 21:42:14.727258 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-05-19 21:42:14.728029 | orchestrator | Monday 19 May 2025 21:42:14 +0000 (0:00:07.719) 0:04:56.078 ************ 2025-05-19 21:42:16.368842 | orchestrator | ok: [testbed-manager] 2025-05-19 21:42:16.369103 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:42:16.369710 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:42:16.370746 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:42:16.371563 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:42:16.372590 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:42:16.373036 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:42:16.374440 | orchestrator | 2025-05-19 21:42:16.374531 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-05-19 21:42:16.375254 | orchestrator | Monday 19 May 2025 21:42:16 +0000 (0:00:01.652) 0:04:57.731 ************ 2025-05-19 21:42:21.965924 | orchestrator | changed: [testbed-manager] 2025-05-19 21:42:21.966977 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:42:21.968994 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:42:21.970461 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:42:21.971101 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:42:21.972090 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:42:21.973764 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:42:21.974393 | orchestrator | 2025-05-19 21:42:21.975407 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-05-19 21:42:21.975919 | orchestrator | Monday 19 May 2025 21:42:21 +0000 (0:00:05.596) 0:05:03.327 ************ 2025-05-19 21:42:22.400937 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:42:22.401329 | orchestrator | 2025-05-19 21:42:22.401861 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-05-19 21:42:22.402844 | orchestrator | Monday 19 May 2025 21:42:22 +0000 (0:00:00.435) 0:05:03.763 ************ 2025-05-19 21:42:23.143991 | orchestrator | changed: [testbed-manager] 2025-05-19 21:42:23.144639 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:42:23.145961 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:42:23.146963 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:42:23.147970 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:42:23.148804 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:42:23.151503 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:42:23.151553 | orchestrator | 2025-05-19 21:42:23.151567 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-05-19 21:42:23.152489 | orchestrator | Monday 19 May 2025 21:42:23 +0000 (0:00:00.742) 0:05:04.505 ************ 2025-05-19 21:42:24.834621 | orchestrator | ok: [testbed-manager] 2025-05-19 21:42:24.836339 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:42:24.836412 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:42:24.836475 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:42:24.837787 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:42:24.838953 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:42:24.839657 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:42:24.840573 | orchestrator | 2025-05-19 21:42:24.841494 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-05-19 21:42:24.842006 | orchestrator | Monday 19 May 2025 21:42:24 +0000 (0:00:01.690) 0:05:06.196 ************ 2025-05-19 21:42:25.598302 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:42:25.598495 | 
orchestrator | changed: [testbed-node-5] 2025-05-19 21:42:25.599192 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:42:25.602811 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:42:25.605774 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:42:25.609048 | orchestrator | changed: [testbed-manager] 2025-05-19 21:42:25.610000 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:42:25.610721 | orchestrator | 2025-05-19 21:42:25.610960 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-05-19 21:42:25.611769 | orchestrator | Monday 19 May 2025 21:42:25 +0000 (0:00:00.758) 0:05:06.955 ************ 2025-05-19 21:42:25.668301 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:42:25.707426 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:42:25.743786 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:42:25.780791 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:42:25.814231 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:42:25.875806 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:42:25.876443 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:42:25.876944 | orchestrator | 2025-05-19 21:42:25.877817 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-05-19 21:42:25.878774 | orchestrator | Monday 19 May 2025 21:42:25 +0000 (0:00:00.284) 0:05:07.239 ************ 2025-05-19 21:42:25.942247 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:42:25.974869 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:42:26.033855 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:42:26.075615 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:42:26.108143 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:42:26.288474 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:42:26.288837 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:42:26.290759 | 
orchestrator | 2025-05-19 21:42:26.290795 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-05-19 21:42:26.290808 | orchestrator | Monday 19 May 2025 21:42:26 +0000 (0:00:00.411) 0:05:07.651 ************ 2025-05-19 21:42:26.413075 | orchestrator | ok: [testbed-manager] 2025-05-19 21:42:26.450203 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:42:26.493663 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:42:26.526310 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:42:26.596622 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:42:26.596745 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:42:26.596934 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:42:26.597179 | orchestrator | 2025-05-19 21:42:26.598303 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-05-19 21:42:26.598971 | orchestrator | Monday 19 May 2025 21:42:26 +0000 (0:00:00.307) 0:05:07.959 ************ 2025-05-19 21:42:26.687940 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:42:26.720636 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:42:26.756086 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:42:26.786186 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:42:26.814602 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:42:26.891217 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:42:26.893697 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:42:26.897151 | orchestrator | 2025-05-19 21:42:26.897858 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-05-19 21:42:26.898560 | orchestrator | Monday 19 May 2025 21:42:26 +0000 (0:00:00.295) 0:05:08.255 ************ 2025-05-19 21:42:26.977359 | orchestrator | ok: [testbed-manager] 2025-05-19 21:42:27.052693 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:42:27.096717 | orchestrator | ok: [testbed-node-4] 2025-05-19 
21:42:27.147146 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:42:27.220994 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:42:27.222196 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:42:27.223108 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:42:27.223765 | orchestrator | 2025-05-19 21:42:27.224327 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-05-19 21:42:27.225033 | orchestrator | Monday 19 May 2025 21:42:27 +0000 (0:00:00.331) 0:05:08.586 ************ 2025-05-19 21:42:27.325977 | orchestrator | ok: [testbed-manager] =>  2025-05-19 21:42:27.326125 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 21:42:27.356716 | orchestrator | ok: [testbed-node-3] =>  2025-05-19 21:42:27.357570 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 21:42:27.405228 | orchestrator | ok: [testbed-node-4] =>  2025-05-19 21:42:27.406122 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 21:42:27.543490 | orchestrator | ok: [testbed-node-5] =>  2025-05-19 21:42:27.544516 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 21:42:27.640105 | orchestrator | ok: [testbed-node-0] =>  2025-05-19 21:42:27.640216 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 21:42:27.640231 | orchestrator | ok: [testbed-node-1] =>  2025-05-19 21:42:27.640242 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 21:42:27.640253 | orchestrator | ok: [testbed-node-2] =>  2025-05-19 21:42:27.640264 | orchestrator |  docker_version: 5:27.5.1 2025-05-19 21:42:27.640275 | orchestrator | 2025-05-19 21:42:27.640288 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-05-19 21:42:27.640301 | orchestrator | Monday 19 May 2025 21:42:27 +0000 (0:00:00.408) 0:05:08.994 ************ 2025-05-19 21:42:27.708455 | orchestrator | ok: [testbed-manager] =>  2025-05-19 21:42:27.708748 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 21:42:27.743861 | orchestrator | ok: 
[testbed-node-3] =>  2025-05-19 21:42:27.744104 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 21:42:27.812239 | orchestrator | ok: [testbed-node-4] =>  2025-05-19 21:42:27.813193 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 21:42:27.853228 | orchestrator | ok: [testbed-node-5] =>  2025-05-19 21:42:27.853355 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 21:42:27.908995 | orchestrator | ok: [testbed-node-0] =>  2025-05-19 21:42:27.910302 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 21:42:27.911421 | orchestrator | ok: [testbed-node-1] =>  2025-05-19 21:42:27.913154 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 21:42:27.914214 | orchestrator | ok: [testbed-node-2] =>  2025-05-19 21:42:27.914908 | orchestrator |  docker_cli_version: 5:27.5.1 2025-05-19 21:42:27.916255 | orchestrator | 2025-05-19 21:42:27.917317 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-05-19 21:42:27.918371 | orchestrator | Monday 19 May 2025 21:42:27 +0000 (0:00:00.279) 0:05:09.274 ************ 2025-05-19 21:42:27.978896 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:42:28.014837 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:42:28.059811 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:42:28.094382 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:42:28.125496 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:42:28.182781 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:42:28.183352 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:42:28.184049 | orchestrator | 2025-05-19 21:42:28.184482 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-05-19 21:42:28.185408 | orchestrator | Monday 19 May 2025 21:42:28 +0000 (0:00:00.272) 0:05:09.546 ************ 2025-05-19 21:42:28.278937 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:42:28.308359 | orchestrator | 
skipping: [testbed-node-3] 2025-05-19 21:42:28.348035 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:42:28.380117 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:42:28.431796 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:42:28.433824 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:42:28.433904 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:42:28.434225 | orchestrator | 2025-05-19 21:42:28.435120 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-05-19 21:42:28.441176 | orchestrator | Monday 19 May 2025 21:42:28 +0000 (0:00:00.249) 0:05:09.796 ************ 2025-05-19 21:42:28.852191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:42:28.852476 | orchestrator | 2025-05-19 21:42:28.853648 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-19 21:42:28.854807 | orchestrator | Monday 19 May 2025 21:42:28 +0000 (0:00:00.417) 0:05:10.213 ************ 2025-05-19 21:42:29.683016 | orchestrator | ok: [testbed-manager] 2025-05-19 21:42:29.684422 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:42:29.684456 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:42:29.684659 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:42:29.686668 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:42:29.687537 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:42:29.688119 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:42:29.689044 | orchestrator | 2025-05-19 21:42:29.689840 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-19 21:42:29.690271 | orchestrator | Monday 19 May 2025 21:42:29 +0000 (0:00:00.832) 0:05:11.045 ************ 2025-05-19 
21:42:32.458005 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:42:32.458223 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:42:32.459416 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:42:32.462620 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:42:32.462645 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:42:32.462684 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:42:32.462696 | orchestrator | ok: [testbed-manager] 2025-05-19 21:42:32.462707 | orchestrator | 2025-05-19 21:42:32.463469 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-19 21:42:32.464325 | orchestrator | Monday 19 May 2025 21:42:32 +0000 (0:00:02.773) 0:05:13.819 ************ 2025-05-19 21:42:32.538347 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-19 21:42:32.538481 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-19 21:42:32.539479 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-19 21:42:32.612524 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:42:32.612815 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-19 21:42:32.614095 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-19 21:42:32.615122 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-19 21:42:32.687087 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:42:32.687733 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-19 21:42:32.688651 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-19 21:42:32.690930 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-19 21:42:32.922314 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-19 21:42:32.922446 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-19 21:42:32.922520 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2025-05-19 21:42:32.991657 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:42:32.991919 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-19 21:42:32.992119 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-19 21:42:32.993906 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-19 21:42:33.065079 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:42:33.065718 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-19 21:42:33.067251 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-19 21:42:33.068263 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-05-19 21:42:33.215784 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:42:33.215901 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:42:33.217419 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-19 21:42:33.217527 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-19 21:42:33.218756 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-19 21:42:33.219368 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:42:33.221323 | orchestrator | 2025-05-19 21:42:33.221345 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-19 21:42:33.221359 | orchestrator | Monday 19 May 2025 21:42:33 +0000 (0:00:00.755) 0:05:14.575 ************ 2025-05-19 21:42:39.498361 | orchestrator | ok: [testbed-manager] 2025-05-19 21:42:39.499009 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:42:39.499670 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:42:39.500873 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:42:39.501754 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:42:39.502642 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:42:39.503466 | orchestrator | changed: [testbed-node-4] 
2025-05-19 21:42:39.504399 | orchestrator |
2025-05-19 21:42:39.505139 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-05-19 21:42:39.505952 | orchestrator | Monday 19 May 2025 21:42:39 +0000 (0:00:06.284) 0:05:20.859 ************
2025-05-19 21:42:40.541439 | orchestrator | ok: [testbed-manager]
2025-05-19 21:42:40.542198 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:42:40.543480 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:42:40.544193 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:42:40.544826 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:42:40.545963 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:42:40.547120 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:42:40.547896 | orchestrator |
2025-05-19 21:42:40.548413 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-05-19 21:42:40.548891 | orchestrator | Monday 19 May 2025 21:42:40 +0000 (0:00:01.044) 0:05:21.903 ************
2025-05-19 21:42:47.936828 | orchestrator | ok: [testbed-manager]
2025-05-19 21:42:47.937062 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:42:47.937933 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:42:47.939476 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:42:47.941474 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:42:47.941527 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:42:47.942002 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:42:47.942600 | orchestrator |
2025-05-19 21:42:47.943368 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-05-19 21:42:47.944265 | orchestrator | Monday 19 May 2025 21:42:47 +0000 (0:00:07.394) 0:05:29.298 ************
2025-05-19 21:42:51.063105 | orchestrator | changed: [testbed-manager]
2025-05-19 21:42:51.063616 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:42:51.066108 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:42:51.067177 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:42:51.068311 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:42:51.069048 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:42:51.069504 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:42:51.070200 | orchestrator |
2025-05-19 21:42:51.070917 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-05-19 21:42:51.072143 | orchestrator | Monday 19 May 2025 21:42:51 +0000 (0:00:03.125) 0:05:32.423 ************
2025-05-19 21:42:52.575502 | orchestrator | ok: [testbed-manager]
2025-05-19 21:42:52.576637 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:42:52.578117 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:42:52.578928 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:42:52.580995 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:42:52.582158 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:42:52.583094 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:42:52.583699 | orchestrator |
2025-05-19 21:42:52.584874 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-05-19 21:42:52.585176 | orchestrator | Monday 19 May 2025 21:42:52 +0000 (0:00:01.512) 0:05:33.936 ************
2025-05-19 21:42:53.872227 | orchestrator | ok: [testbed-manager]
2025-05-19 21:42:53.872499 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:42:53.873954 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:42:53.874970 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:42:53.876163 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:42:53.877221 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:42:53.879409 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:42:53.880604 | orchestrator |
2025-05-19 21:42:53.880993 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-05-19 21:42:53.881806 | orchestrator | Monday 19 May 2025 21:42:53 +0000 (0:00:01.297) 0:05:35.234 ************
2025-05-19 21:42:54.086695 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:42:54.147743 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:42:54.213240 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:42:54.283081 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:42:54.448711 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:42:54.454388 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:42:54.455800 | orchestrator | changed: [testbed-manager]
2025-05-19 21:42:54.455842 | orchestrator |
2025-05-19 21:42:54.456585 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-05-19 21:42:54.457700 | orchestrator | Monday 19 May 2025 21:42:54 +0000 (0:00:00.575) 0:05:35.810 ************
2025-05-19 21:43:04.302866 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:04.303894 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:04.305265 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:04.305872 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:04.307393 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:04.308986 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:04.309683 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:04.310853 | orchestrator |
2025-05-19 21:43:04.311710 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-05-19 21:43:04.312359 | orchestrator | Monday 19 May 2025 21:43:04 +0000 (0:00:09.854) 0:05:45.664 ************
2025-05-19 21:43:05.421113 | orchestrator | changed: [testbed-manager]
2025-05-19 21:43:05.421284 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:05.421838 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:05.422779 | orchestrator | changed: [testbed-node-5]
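The pin/unlock/lock sequence in these tasks combines two apt mechanisms: an apt preferences entry pins the Docker packages to a tested version, and `apt-mark unhold`/`hold` brackets the containerd install so a held package can be upgraded exactly once. A sketch of the pattern, assuming a placeholder version string (the role templates the real value from its variables); the preferences entry goes to a temp file so the sketch is safe to execute:

```shell
# Sketch only: the hold/unhold bracket is shown as comments because those
# commands mutate the system; the preferences entry is written to a temp file.
prefs=$(mktemp)
docker_version='5:27.*'  # placeholder, not the value used by the role

# "Pin docker package version" -- an apt preferences entry keeps routine
# upgrades from pulling in an untested docker-ce:
cat > "$prefs" <<EOF
Package: docker-ce
Pin: version $docker_version
Pin-Priority: 1000
EOF

# "Unlock containerd package" / "Install containerd package" / "Lock containerd package":
#   apt-mark unhold containerd.io
#   apt-get install -y containerd.io=<pinned version>
#   apt-mark hold containerd.io
cat "$prefs"
```

This also explains the `skipping:` vs `changed:` split on the unlock task: only the manager, where containerd was already held from a previous run, had anything to unhold.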
2025-05-19 21:43:05.422937 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:05.425616 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:05.425643 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:05.425654 | orchestrator |
2025-05-19 21:43:05.425689 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-05-19 21:43:05.425703 | orchestrator | Monday 19 May 2025 21:43:05 +0000 (0:00:01.118) 0:05:46.783 ************
2025-05-19 21:43:14.076480 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:14.078279 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:14.079141 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:14.081303 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:14.081702 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:14.082792 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:14.083462 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:14.084241 | orchestrator |
2025-05-19 21:43:14.084843 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-05-19 21:43:14.085286 | orchestrator | Monday 19 May 2025 21:43:14 +0000 (0:00:08.654) 0:05:55.437 ************
2025-05-19 21:43:24.767392 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:24.767510 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:24.767523 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:24.769348 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:24.769959 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:24.770997 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:24.772111 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:24.773255 | orchestrator |
2025-05-19 21:43:24.774063 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-05-19 21:43:24.775616 | orchestrator | Monday 19 May 2025 21:43:24 +0000 (0:00:10.689) 0:06:06.127 ************
2025-05-19 21:43:25.188439 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-05-19 21:43:25.954963 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-05-19 21:43:25.955076 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-05-19 21:43:25.958428 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-05-19 21:43:25.959490 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-05-19 21:43:25.959513 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-05-19 21:43:25.959778 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-05-19 21:43:25.960462 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-05-19 21:43:25.960484 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-05-19 21:43:25.963770 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-05-19 21:43:25.963796 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-05-19 21:43:25.964475 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-05-19 21:43:25.964670 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-05-19 21:43:25.965574 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-05-19 21:43:25.967198 | orchestrator |
2025-05-19 21:43:25.967223 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-05-19 21:43:25.967238 | orchestrator | Monday 19 May 2025 21:43:25 +0000 (0:00:01.189) 0:06:07.316 ************
2025-05-19 21:43:26.108855 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:43:26.180238 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:43:26.252724 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:43:26.316732 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:43:26.385698 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:43:26.502224 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:43:26.502325 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:43:26.502926 | orchestrator |
2025-05-19 21:43:26.503456 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-05-19 21:43:26.504203 | orchestrator | Monday 19 May 2025 21:43:26 +0000 (0:00:00.547) 0:06:07.864 ************
2025-05-19 21:43:30.335148 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:30.335507 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:30.336264 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:30.337956 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:30.339622 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:30.340216 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:30.340646 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:30.341277 | orchestrator |
2025-05-19 21:43:30.341852 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-05-19 21:43:30.342403 | orchestrator | Monday 19 May 2025 21:43:30 +0000 (0:00:03.831) 0:06:11.696 ************
2025-05-19 21:43:30.473019 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:43:30.533813 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:43:30.603605 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:43:30.667065 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:43:30.733022 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:43:30.843447 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:43:30.843731 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:43:30.844347 | orchestrator |
2025-05-19 21:43:30.844821 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-05-19 21:43:30.845402 | orchestrator | Monday 19 May 2025 21:43:30 +0000 (0:00:00.512) 0:06:12.208 ************
2025-05-19 21:43:30.911249 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-05-19 21:43:30.911734 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-05-19 21:43:30.976891 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:43:30.977479 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-05-19 21:43:30.978563 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-05-19 21:43:31.047216 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:43:31.047717 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-05-19 21:43:31.049009 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-05-19 21:43:31.115442 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:43:31.115769 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-05-19 21:43:31.116827 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-05-19 21:43:31.180026 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:43:31.180872 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-05-19 21:43:31.182159 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-05-19 21:43:31.252710 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:43:31.253307 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-05-19 21:43:31.254127 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-05-19 21:43:31.366181 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:43:31.366674 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-05-19 21:43:31.367801 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-05-19 21:43:31.369601 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:43:31.370763 | orchestrator |
2025-05-19 21:43:31.371341 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-05-19 21:43:31.372140 | orchestrator | Monday 19 May 2025 21:43:31 +0000 (0:00:00.520) 0:06:12.728 ************
2025-05-19 21:43:31.509012 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:43:31.570836 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:43:31.632682 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:43:31.700191 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:43:31.767605 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:43:31.873790 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:43:31.874235 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:43:31.874947 | orchestrator |
2025-05-19 21:43:31.875910 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-05-19 21:43:31.877191 | orchestrator | Monday 19 May 2025 21:43:31 +0000 (0:00:00.507) 0:06:13.236 ************
2025-05-19 21:43:32.015814 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:43:32.076231 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:43:32.142893 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:43:32.204785 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:43:32.265169 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:43:32.368811 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:43:32.369200 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:43:32.369988 | orchestrator |
2025-05-19 21:43:32.370589 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-05-19 21:43:32.371308 | orchestrator | Monday 19 May 2025 21:43:32 +0000 (0:00:00.496) 0:06:13.733 ************
2025-05-19 21:43:32.519905 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:43:32.597969 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:43:32.822467 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:43:32.896755 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:43:32.962214 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:43:33.083400 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:43:33.084077 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:43:33.085638 | orchestrator |
2025-05-19 21:43:33.086103 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-05-19 21:43:33.086684 | orchestrator | Monday 19 May 2025 21:43:33 +0000 (0:00:00.712) 0:06:14.446 ************
2025-05-19 21:43:34.766806 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:34.769557 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:43:34.770206 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:43:34.771032 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:43:34.772484 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:43:34.772513 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:43:34.772659 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:43:34.773913 | orchestrator |
2025-05-19 21:43:34.774155 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-05-19 21:43:34.774490 | orchestrator | Monday 19 May 2025 21:43:34 +0000 (0:00:01.682) 0:06:16.128 ************
2025-05-19 21:43:35.606429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:43:35.606666 | orchestrator |
2025-05-19 21:43:35.607685 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-05-19 21:43:35.608556 | orchestrator | Monday 19 May 2025 21:43:35 +0000 (0:00:00.840) 0:06:16.969 ************
2025-05-19 21:43:36.004876 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:36.431678 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:36.432204 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:36.432842 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:36.433997 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:36.434609 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:36.435066 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:36.436106 | orchestrator |
2025-05-19 21:43:36.436316 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-05-19 21:43:36.436683 | orchestrator | Monday 19 May 2025 21:43:36 +0000 (0:00:00.824) 0:06:17.793 ************
2025-05-19 21:43:36.873436 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:37.572077 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:37.572511 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:37.573480 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:37.574415 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:37.575183 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:37.575854 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:37.576842 | orchestrator |
2025-05-19 21:43:37.577554 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-05-19 21:43:37.577929 | orchestrator | Monday 19 May 2025 21:43:37 +0000 (0:00:01.142) 0:06:18.936 ************
2025-05-19 21:43:38.908585 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:38.908854 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:38.910064 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:38.911074 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:38.911875 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:38.912366 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:38.914055 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:38.914635 | orchestrator |
2025-05-19 21:43:38.915247 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-05-19 21:43:38.915846 | orchestrator | Monday 19 May 2025 21:43:38 +0000 (0:00:01.332) 0:06:20.269 ************
2025-05-19 21:43:39.041048 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:43:40.264069 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:43:40.264176 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:43:40.264190 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:43:40.266371 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:43:40.267937 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:43:40.269763 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:43:40.271004 | orchestrator |
2025-05-19 21:43:40.272275 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-05-19 21:43:40.273335 | orchestrator | Monday 19 May 2025 21:43:40 +0000 (0:00:01.351) 0:06:21.621 ************
2025-05-19 21:43:41.613339 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:41.613583 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:41.613661 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:41.614471 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:41.616573 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:41.616796 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:41.617458 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:41.618450 | orchestrator |
2025-05-19 21:43:41.619096 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-05-19 21:43:41.619784 | orchestrator | Monday 19 May 2025 21:43:41 +0000 (0:00:01.353) 0:06:22.974 ************
2025-05-19 21:43:43.238815 | orchestrator | changed: [testbed-manager]
2025-05-19 21:43:43.238948 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:43.239796 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:43.242295 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:43.242486 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:43.243393 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:43.247352 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:43.247411 | orchestrator |
2025-05-19 21:43:43.247435 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-05-19 21:43:43.247457 | orchestrator | Monday 19 May 2025 21:43:43 +0000 (0:00:01.625) 0:06:24.600 ************
2025-05-19 21:43:44.100746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:43:44.101281 | orchestrator |
2025-05-19 21:43:44.102185 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-05-19 21:43:44.103499 | orchestrator | Monday 19 May 2025 21:43:44 +0000 (0:00:00.862) 0:06:25.462 ************
2025-05-19 21:43:45.431500 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:45.432774 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:43:45.434785 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:43:45.435845 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:43:45.436492 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:43:45.436920 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:43:45.437967 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:43:45.438698 | orchestrator |
2025-05-19 21:43:45.439753 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-05-19 21:43:45.440837 | orchestrator | Monday 19 May 2025 21:43:45 +0000 (0:00:01.329) 0:06:26.792 ************
2025-05-19 21:43:46.547007 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:46.548492 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:43:46.548558 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:43:46.549433 | orchestrator | ok: [testbed-node-5]
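The config tasks in this stretch install a systemd drop-in ("overlay") for the docker unit plus `/etc/docker/daemon.json`, and reload systemd only when the overlay actually changed. The file contents below are illustrative assumptions, not the role's actual templates, and they are written to a scratch directory so the sketch is runnable:

```shell
# Sketch only: real paths would be /etc/systemd/system/docker.service.d/
# and /etc/docker/daemon.json.
confdir=$(mktemp -d)

# "Create systemd overlay directory" / "Copy systemd overlay file":
mkdir -p "$confdir/docker.service.d"
cat > "$confdir/docker.service.d/overlay.conf" <<'EOF'
[Service]
# example override; illustrative, not the role's actual settings
LimitNOFILE=1048576
EOF

# "Copy daemon.json configuration file":
cat > "$confdir/daemon.json" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF

# "Reload systemd daemon" -- picks up the drop-in; the daemon.json change
# is applied later by the "Restart docker service" handler:
#   systemctl daemon-reload
cat "$confdir/daemon.json"
```

The drop-in mechanism is why the reload task reports `skipping:` on the manager: its overlay file was unchanged, so no reload was notified there.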
2025-05-19 21:43:46.550301 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:43:46.550923 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:43:46.551650 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:43:46.552129 | orchestrator |
2025-05-19 21:43:46.552730 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-05-19 21:43:46.553292 | orchestrator | Monday 19 May 2025 21:43:46 +0000 (0:00:01.115) 0:06:27.908 ************
2025-05-19 21:43:47.872292 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:47.873739 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:43:47.875475 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:43:47.876470 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:43:47.877175 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:43:47.877720 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:43:47.878786 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:43:47.879150 | orchestrator |
2025-05-19 21:43:47.879872 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-19 21:43:47.880175 | orchestrator | Monday 19 May 2025 21:43:47 +0000 (0:00:01.324) 0:06:29.233 ************
2025-05-19 21:43:48.985810 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:48.986709 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:43:48.987112 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:43:48.987359 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:43:48.988105 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:43:48.989179 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:43:48.990012 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:43:48.990617 | orchestrator |
2025-05-19 21:43:48.991505 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-05-19 21:43:48.992051 | orchestrator | Monday 19 May 2025 21:43:48 +0000 (0:00:01.110) 0:06:30.344 ************
2025-05-19 21:43:50.279035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:43:50.280339 | orchestrator |
2025-05-19 21:43:50.280375 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-19 21:43:50.281031 | orchestrator | Monday 19 May 2025 21:43:49 +0000 (0:00:00.852) 0:06:31.196 ************
2025-05-19 21:43:50.281734 | orchestrator |
2025-05-19 21:43:50.282657 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-19 21:43:50.283538 | orchestrator | Monday 19 May 2025 21:43:49 +0000 (0:00:00.043) 0:06:31.239 ************
2025-05-19 21:43:50.284220 | orchestrator |
2025-05-19 21:43:50.284760 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-19 21:43:50.285243 | orchestrator | Monday 19 May 2025 21:43:49 +0000 (0:00:00.039) 0:06:31.279 ************
2025-05-19 21:43:50.286217 | orchestrator |
2025-05-19 21:43:50.287314 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-19 21:43:50.288542 | orchestrator | Monday 19 May 2025 21:43:49 +0000 (0:00:00.037) 0:06:31.317 ************
2025-05-19 21:43:50.288563 | orchestrator |
2025-05-19 21:43:50.288902 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-19 21:43:50.289349 | orchestrator | Monday 19 May 2025 21:43:49 +0000 (0:00:00.047) 0:06:31.364 ************
2025-05-19 21:43:50.289657 | orchestrator |
2025-05-19 21:43:50.291398 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-19 21:43:50.292041 | orchestrator | Monday 19 May 2025 21:43:50 +0000 (0:00:00.037) 0:06:31.402 ************
2025-05-19 21:43:50.292481 | orchestrator |
2025-05-19 21:43:50.293899 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-19 21:43:50.294469 | orchestrator | Monday 19 May 2025 21:43:50 +0000 (0:00:00.037) 0:06:31.439 ************
2025-05-19 21:43:50.295498 | orchestrator |
2025-05-19 21:43:50.295758 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-19 21:43:50.297260 | orchestrator | Monday 19 May 2025 21:43:50 +0000 (0:00:00.200) 0:06:31.640 ************
2025-05-19 21:43:51.384819 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:43:51.384928 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:43:51.384943 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:43:51.385605 | orchestrator |
2025-05-19 21:43:51.385956 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-05-19 21:43:51.386176 | orchestrator | Monday 19 May 2025 21:43:51 +0000 (0:00:01.106) 0:06:32.746 ************
2025-05-19 21:43:52.881148 | orchestrator | changed: [testbed-manager]
2025-05-19 21:43:52.882316 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:52.884286 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:52.886369 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:52.887277 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:52.888484 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:52.889464 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:52.890859 | orchestrator |
2025-05-19 21:43:52.892076 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-05-19 21:43:52.892457 | orchestrator | Monday 19 May 2025 21:43:52 +0000 (0:00:01.496) 0:06:34.243 ************
2025-05-19 21:43:53.986965 | orchestrator | changed: [testbed-manager]
2025-05-19 21:43:53.987190 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:53.988058 | orchestrator | changed: [testbed-node-4]
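The repeated "Flush handlers" tasks above release the deferred restarts that earlier config changes notified, and the RUNNING HANDLER entries are those restarts executing (rsyslog, smartd, then docker). The subsequent "Add user to docker group" task grants the deploy user access to the Docker socket without sudo, typically `usermod -aG docker <user>`; the user name is not shown in the log. A small runnable sketch of checking such membership against a `groups`-style line (the example line is an illustrative assumption):

```shell
# Illustrative `groups` output; not taken from the log.
groups_line="dragon adm sudo docker"

# Surround with spaces so "docker" matches only as a whole word:
case " $groups_line " in
    *" docker "*) in_docker_group=yes ;;
    *)            in_docker_group=no  ;;
esac
echo "in docker group: $in_docker_group"
```

In practice the group change takes effect only for new login sessions, which is one reason deployment tooling restarts or re-establishes its SSH connections after this step.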
2025-05-19 21:43:53.990457 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:53.991756 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:53.992386 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:53.993413 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:53.993759 | orchestrator |
2025-05-19 21:43:53.995643 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-05-19 21:43:53.996274 | orchestrator | Monday 19 May 2025 21:43:53 +0000 (0:00:01.103) 0:06:35.346 ************
2025-05-19 21:43:54.111137 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:43:56.216735 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:56.216873 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:56.216900 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:56.218334 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:56.219020 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:56.220647 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:56.220686 | orchestrator |
2025-05-19 21:43:56.221228 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-05-19 21:43:56.222448 | orchestrator | Monday 19 May 2025 21:43:56 +0000 (0:00:02.226) 0:06:37.573 ************
2025-05-19 21:43:56.308155 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:43:56.308256 | orchestrator |
2025-05-19 21:43:56.308330 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-05-19 21:43:56.309711 | orchestrator | Monday 19 May 2025 21:43:56 +0000 (0:00:00.096) 0:06:37.670 ************
2025-05-19 21:43:57.538609 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:57.539848 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:43:57.540366 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:43:57.541770 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:43:57.543256 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:43:57.544275 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:43:57.545696 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:43:57.546286 | orchestrator |
2025-05-19 21:43:57.546868 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-05-19 21:43:57.547936 | orchestrator | Monday 19 May 2025 21:43:57 +0000 (0:00:01.229) 0:06:38.899 ************
2025-05-19 21:43:57.751098 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:43:57.817352 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:43:57.886853 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:43:57.953862 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:43:58.071784 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:43:58.072591 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:43:58.073704 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:43:58.075285 | orchestrator |
2025-05-19 21:43:58.075570 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-19 21:43:58.076862 | orchestrator | Monday 19 May 2025 21:43:58 +0000 (0:00:00.534) 0:06:39.434 ************
2025-05-19 21:43:58.910982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:43:58.911757 | orchestrator |
2025-05-19 21:43:58.912870 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-19 21:43:58.914653 | orchestrator | Monday 19 May 2025 21:43:58 +0000 (0:00:00.838) 0:06:40.272 ************
2025-05-19 21:43:59.757997 | orchestrator | ok: [testbed-manager]
2025-05-19 21:43:59.758177 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:43:59.759144 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:43:59.760324 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:43:59.761001 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:43:59.761681 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:43:59.762455 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:43:59.763349 | orchestrator |
2025-05-19 21:43:59.764155 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-19 21:43:59.764658 | orchestrator | Monday 19 May 2025 21:43:59 +0000 (0:00:00.845) 0:06:41.118 ************
2025-05-19 21:44:02.402818 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-19 21:44:02.403252 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-19 21:44:02.404821 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-19 21:44:02.406198 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-19 21:44:02.407195 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-19 21:44:02.408583 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-19 21:44:02.409406 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-19 21:44:02.410693 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-19 21:44:02.411271 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-19 21:44:02.411683 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-19 21:44:02.412815 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-19 21:44:02.413589 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-19 21:44:02.413907 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-19 21:44:02.414616 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-19 21:44:02.415118 | orchestrator |
2025-05-19 21:44:02.415875 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-19 21:44:02.416320 | orchestrator | Monday 19 May 2025 21:44:02 +0000 (0:00:02.644) 0:06:43.762 ************
2025-05-19 21:44:02.537776 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:44:02.613741 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:44:02.678896 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:44:02.743013 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:44:02.811165 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:44:02.906081 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:44:02.907240 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:44:02.908059 | orchestrator |
2025-05-19 21:44:02.908928 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-19 21:44:02.910366 | orchestrator | Monday 19 May 2025 21:44:02 +0000 (0:00:00.504) 0:06:44.267 ************
2025-05-19 21:44:03.690235 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:44:03.691095 | orchestrator |
2025-05-19 21:44:03.694352 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-19 21:44:03.694380 | orchestrator | Monday 19 May 2025 21:44:03 +0000 (0:00:00.783) 0:06:45.050 ************
2025-05-19 21:44:04.202709 | orchestrator | ok: [testbed-manager]
2025-05-19 21:44:04.269437 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:44:04.352405 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:44:04.793380 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:44:04.793608 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:44:04.794148 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:44:04.794885 | orchestrator | ok:
[testbed-node-2] 2025-05-19 21:44:04.796579 | orchestrator | 2025-05-19 21:44:04.796606 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-05-19 21:44:04.797649 | orchestrator | Monday 19 May 2025 21:44:04 +0000 (0:00:01.097) 0:06:46.148 ************ 2025-05-19 21:44:05.211086 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:05.610257 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:05.610819 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:05.611824 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:05.613067 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:05.613310 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:05.613934 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:05.614473 | orchestrator | 2025-05-19 21:44:05.615204 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-05-19 21:44:05.615586 | orchestrator | Monday 19 May 2025 21:44:05 +0000 (0:00:00.819) 0:06:46.967 ************ 2025-05-19 21:44:05.759337 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:44:05.819708 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:44:05.925241 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:44:05.995968 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:44:06.064170 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:44:06.172456 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:44:06.173384 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:44:06.174765 | orchestrator | 2025-05-19 21:44:06.176531 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-05-19 21:44:06.177781 | orchestrator | Monday 19 May 2025 21:44:06 +0000 (0:00:00.566) 0:06:47.534 ************ 2025-05-19 21:44:07.609018 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:07.620821 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:07.622140 | 
orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:07.623055 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:07.624114 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:07.625421 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:07.626627 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:07.626902 | orchestrator | 2025-05-19 21:44:07.628868 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-05-19 21:44:07.629358 | orchestrator | Monday 19 May 2025 21:44:07 +0000 (0:00:01.426) 0:06:48.960 ************ 2025-05-19 21:44:07.775125 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:44:07.845862 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:44:07.911026 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:44:07.977100 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:44:08.040936 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:44:08.137915 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:44:08.140894 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:44:08.141925 | orchestrator | 2025-05-19 21:44:08.143075 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-05-19 21:44:08.144157 | orchestrator | Monday 19 May 2025 21:44:08 +0000 (0:00:00.538) 0:06:49.499 ************ 2025-05-19 21:44:15.679349 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:15.679479 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:44:15.679538 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:44:15.679614 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:44:15.680980 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:44:15.682002 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:44:15.683711 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:44:15.686065 | orchestrator | 2025-05-19 21:44:15.687569 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2025-05-19 21:44:15.688294 | orchestrator | Monday 19 May 2025 21:44:15 +0000 (0:00:07.541) 0:06:57.040 ************ 2025-05-19 21:44:16.997724 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:16.997842 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:44:16.999452 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:44:17.000185 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:44:17.000862 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:44:17.001570 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:44:17.001947 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:44:17.002649 | orchestrator | 2025-05-19 21:44:17.003741 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-05-19 21:44:17.004024 | orchestrator | Monday 19 May 2025 21:44:16 +0000 (0:00:01.317) 0:06:58.358 ************ 2025-05-19 21:44:18.714220 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:18.714329 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:44:18.714345 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:44:18.714357 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:44:18.714368 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:44:18.714439 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:44:18.714665 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:44:18.715093 | orchestrator | 2025-05-19 21:44:18.715744 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-05-19 21:44:18.716141 | orchestrator | Monday 19 May 2025 21:44:18 +0000 (0:00:01.713) 0:07:00.072 ************ 2025-05-19 21:44:20.508291 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:20.508828 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:44:20.510608 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:44:20.511453 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:44:20.514387 | 
orchestrator | changed: [testbed-node-5] 2025-05-19 21:44:20.514410 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:44:20.514418 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:44:20.514424 | orchestrator | 2025-05-19 21:44:20.515054 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-19 21:44:20.515918 | orchestrator | Monday 19 May 2025 21:44:20 +0000 (0:00:01.797) 0:07:01.869 ************ 2025-05-19 21:44:20.943087 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:21.373012 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:21.375116 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:21.377912 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:21.379025 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:21.379603 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:21.380336 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:21.380733 | orchestrator | 2025-05-19 21:44:21.381591 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-19 21:44:21.382317 | orchestrator | Monday 19 May 2025 21:44:21 +0000 (0:00:00.865) 0:07:02.735 ************ 2025-05-19 21:44:21.500050 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:44:21.562278 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:44:21.627953 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:44:21.690064 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:44:21.752279 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:44:22.134105 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:44:22.135216 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:44:22.135296 | orchestrator | 2025-05-19 21:44:22.136345 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-05-19 21:44:22.136826 | orchestrator | Monday 19 May 2025 21:44:22 +0000 (0:00:00.763) 0:07:03.498 ************ 
2025-05-19 21:44:22.290952 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:44:22.349876 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:44:22.410975 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:44:22.480395 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:44:22.544383 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:44:22.638243 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:44:22.639314 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:44:22.640224 | orchestrator | 2025-05-19 21:44:22.640953 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-05-19 21:44:22.644458 | orchestrator | Monday 19 May 2025 21:44:22 +0000 (0:00:00.502) 0:07:04.001 ************ 2025-05-19 21:44:22.768217 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:22.832356 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:22.898228 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:23.135472 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:23.201917 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:23.304736 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:23.305209 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:23.305739 | orchestrator | 2025-05-19 21:44:23.306531 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-05-19 21:44:23.307051 | orchestrator | Monday 19 May 2025 21:44:23 +0000 (0:00:00.666) 0:07:04.668 ************ 2025-05-19 21:44:23.442444 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:23.518167 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:23.591670 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:23.649667 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:23.721606 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:23.846326 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:23.846448 | orchestrator | ok: [testbed-node-2] 2025-05-19 
21:44:23.847081 | orchestrator | 2025-05-19 21:44:23.847457 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-05-19 21:44:23.848037 | orchestrator | Monday 19 May 2025 21:44:23 +0000 (0:00:00.543) 0:07:05.211 ************ 2025-05-19 21:44:23.976325 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:24.061230 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:24.126369 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:24.189204 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:24.257247 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:24.371179 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:24.371917 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:24.372873 | orchestrator | 2025-05-19 21:44:24.373730 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-05-19 21:44:24.380202 | orchestrator | Monday 19 May 2025 21:44:24 +0000 (0:00:00.522) 0:07:05.733 ************ 2025-05-19 21:44:30.035064 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:30.035432 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:30.036807 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:30.037377 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:30.038429 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:30.038644 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:30.039603 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:30.039922 | orchestrator | 2025-05-19 21:44:30.040638 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-05-19 21:44:30.041051 | orchestrator | Monday 19 May 2025 21:44:30 +0000 (0:00:05.664) 0:07:11.397 ************ 2025-05-19 21:44:30.244705 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:44:30.315979 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:44:30.376970 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:44:30.436183 
| orchestrator | skipping: [testbed-node-5] 2025-05-19 21:44:30.720904 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:44:30.721308 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:44:30.722236 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:44:30.723564 | orchestrator | 2025-05-19 21:44:30.724669 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-05-19 21:44:30.725125 | orchestrator | Monday 19 May 2025 21:44:30 +0000 (0:00:00.684) 0:07:12.082 ************ 2025-05-19 21:44:31.493003 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:44:31.493264 | orchestrator | 2025-05-19 21:44:31.493943 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-05-19 21:44:31.494613 | orchestrator | Monday 19 May 2025 21:44:31 +0000 (0:00:00.772) 0:07:12.855 ************ 2025-05-19 21:44:33.266124 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:33.266246 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:33.266653 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:33.267688 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:33.267716 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:33.268056 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:33.269466 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:33.273391 | orchestrator | 2025-05-19 21:44:33.274347 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-05-19 21:44:33.277373 | orchestrator | Monday 19 May 2025 21:44:33 +0000 (0:00:01.771) 0:07:14.627 ************ 2025-05-19 21:44:34.363038 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:34.363151 | orchestrator | ok: [testbed-node-3] 2025-05-19 
21:44:34.364069 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:34.367291 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:34.368220 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:34.369198 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:34.370208 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:34.370925 | orchestrator | 2025-05-19 21:44:34.371608 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-05-19 21:44:34.372534 | orchestrator | Monday 19 May 2025 21:44:34 +0000 (0:00:01.096) 0:07:15.724 ************ 2025-05-19 21:44:34.970699 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:35.384028 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:35.384953 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:35.386831 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:35.387048 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:35.388602 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:35.389900 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:35.390712 | orchestrator | 2025-05-19 21:44:35.391548 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-05-19 21:44:35.392130 | orchestrator | Monday 19 May 2025 21:44:35 +0000 (0:00:01.021) 0:07:16.745 ************ 2025-05-19 21:44:37.103584 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 21:44:37.106517 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 21:44:37.106574 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 21:44:37.106586 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 21:44:37.106671 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 21:44:37.107908 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 21:44:37.108738 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-05-19 21:44:37.109641 | orchestrator | 2025-05-19 21:44:37.110251 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-05-19 21:44:37.110895 | orchestrator | Monday 19 May 2025 21:44:37 +0000 (0:00:01.718) 0:07:18.464 ************ 2025-05-19 21:44:37.866889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:44:37.867512 | orchestrator | 2025-05-19 21:44:37.868316 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-05-19 21:44:37.871265 | orchestrator | Monday 19 May 2025 21:44:37 +0000 (0:00:00.762) 0:07:19.227 ************ 2025-05-19 21:44:46.725763 | orchestrator | changed: [testbed-manager] 2025-05-19 21:44:46.726557 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:44:46.728726 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:44:46.729956 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:44:46.731265 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:44:46.732582 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:44:46.735008 | orchestrator | changed: 
[testbed-node-4] 2025-05-19 21:44:46.736212 | orchestrator | 2025-05-19 21:44:46.737183 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-05-19 21:44:46.738181 | orchestrator | Monday 19 May 2025 21:44:46 +0000 (0:00:08.855) 0:07:28.083 ************ 2025-05-19 21:44:48.433805 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:48.439383 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:48.439442 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:48.440826 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:48.442078 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:48.442979 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:48.444138 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:48.445065 | orchestrator | 2025-05-19 21:44:48.445745 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-05-19 21:44:48.446620 | orchestrator | Monday 19 May 2025 21:44:48 +0000 (0:00:01.711) 0:07:29.794 ************ 2025-05-19 21:44:49.795956 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:49.796227 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:49.799584 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:49.799649 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:49.800373 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:49.802095 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:49.803878 | orchestrator | 2025-05-19 21:44:49.804338 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-05-19 21:44:49.805563 | orchestrator | Monday 19 May 2025 21:44:49 +0000 (0:00:01.364) 0:07:31.159 ************ 2025-05-19 21:44:51.241357 | orchestrator | changed: [testbed-manager] 2025-05-19 21:44:51.241986 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:44:51.242956 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:44:51.245138 | orchestrator | changed: 
[testbed-node-5] 2025-05-19 21:44:51.245159 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:44:51.246543 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:44:51.247550 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:44:51.248255 | orchestrator | 2025-05-19 21:44:51.249320 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-05-19 21:44:51.249646 | orchestrator | 2025-05-19 21:44:51.252472 | orchestrator | TASK [Include hardening role] ************************************************** 2025-05-19 21:44:51.252543 | orchestrator | Monday 19 May 2025 21:44:51 +0000 (0:00:01.446) 0:07:32.605 ************ 2025-05-19 21:44:51.363981 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:44:51.437008 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:44:51.498810 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:44:51.556305 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:44:51.620396 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:44:51.740980 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:44:51.742171 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:44:51.743281 | orchestrator | 2025-05-19 21:44:51.747358 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-05-19 21:44:51.747748 | orchestrator | 2025-05-19 21:44:51.749036 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-05-19 21:44:51.749374 | orchestrator | Monday 19 May 2025 21:44:51 +0000 (0:00:00.500) 0:07:33.105 ************ 2025-05-19 21:44:53.067925 | orchestrator | changed: [testbed-manager] 2025-05-19 21:44:53.068623 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:44:53.070438 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:44:53.070610 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:44:53.070723 | orchestrator | changed: [testbed-node-0] 2025-05-19 
21:44:53.072050 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:44:53.072571 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:44:53.072944 | orchestrator | 2025-05-19 21:44:53.073459 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-05-19 21:44:53.073972 | orchestrator | Monday 19 May 2025 21:44:53 +0000 (0:00:01.323) 0:07:34.429 ************ 2025-05-19 21:44:54.603947 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:54.604061 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:54.604329 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:54.605717 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:54.609634 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:54.610251 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:54.611291 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:54.611895 | orchestrator | 2025-05-19 21:44:54.612386 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-05-19 21:44:54.613179 | orchestrator | Monday 19 May 2025 21:44:54 +0000 (0:00:01.536) 0:07:35.965 ************ 2025-05-19 21:44:54.741736 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:44:54.814338 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:44:54.879149 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:44:54.946382 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:44:55.010590 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:44:55.392609 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:44:55.392785 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:44:55.397726 | orchestrator | 2025-05-19 21:44:55.397771 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-05-19 21:44:55.397786 | orchestrator | Monday 19 May 2025 21:44:55 +0000 (0:00:00.788) 0:07:36.754 ************ 2025-05-19 21:44:56.652262 | orchestrator | changed: 
[testbed-manager] 2025-05-19 21:44:56.652965 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:44:56.653221 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:44:56.653429 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:44:56.654189 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:44:56.655137 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:44:56.656189 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:44:56.656621 | orchestrator | 2025-05-19 21:44:56.656692 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-19 21:44:56.657438 | orchestrator | 2025-05-19 21:44:56.658501 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-19 21:44:56.660106 | orchestrator | Monday 19 May 2025 21:44:56 +0000 (0:00:01.255) 0:07:38.009 ************ 2025-05-19 21:44:57.588888 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:44:57.589002 | orchestrator | 2025-05-19 21:44:57.589250 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-19 21:44:57.591683 | orchestrator | Monday 19 May 2025 21:44:57 +0000 (0:00:00.938) 0:07:38.948 ************ 2025-05-19 21:44:58.025566 | orchestrator | ok: [testbed-manager] 2025-05-19 21:44:58.423839 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:44:58.424032 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:44:58.426520 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:44:58.427339 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:44:58.428400 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:44:58.429384 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:44:58.430355 | orchestrator | 2025-05-19 21:44:58.432297 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 
2025-05-19 21:44:58.433577 | orchestrator | Monday 19 May 2025 21:44:58 +0000 (0:00:00.835) 0:07:39.784 ************ 2025-05-19 21:44:59.550125 | orchestrator | changed: [testbed-manager] 2025-05-19 21:44:59.550680 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:44:59.553271 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:44:59.553298 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:44:59.553310 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:44:59.553362 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:44:59.554099 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:44:59.554624 | orchestrator | 2025-05-19 21:44:59.555429 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-19 21:44:59.556101 | orchestrator | Monday 19 May 2025 21:44:59 +0000 (0:00:01.126) 0:07:40.910 ************ 2025-05-19 21:45:00.481896 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:45:00.482114 | orchestrator | 2025-05-19 21:45:00.482873 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-19 21:45:00.483377 | orchestrator | Monday 19 May 2025 21:45:00 +0000 (0:00:00.934) 0:07:41.844 ************ 2025-05-19 21:45:00.884597 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:01.316661 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:45:01.317380 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:45:01.317832 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:45:01.318747 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:45:01.319652 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:45:01.319676 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:45:01.320324 | orchestrator | 2025-05-19 21:45:01.320959 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 
2025-05-19 21:45:01.321579 | orchestrator | Monday 19 May 2025 21:45:01 +0000 (0:00:00.835) 0:07:42.679 ************ 2025-05-19 21:45:01.719061 | orchestrator | changed: [testbed-manager] 2025-05-19 21:45:02.385614 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:45:02.386402 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:45:02.387280 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:45:02.388526 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:45:02.388792 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:45:02.390004 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:45:02.390884 | orchestrator | 2025-05-19 21:45:02.391885 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:45:02.392411 | orchestrator | 2025-05-19 21:45:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 21:45:02.393404 | orchestrator | 2025-05-19 21:45:02 | INFO  | Please wait and do not abort execution. 
2025-05-19 21:45:02.393733 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-05-19 21:45:02.395792 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-19 21:45:02.396728 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-19 21:45:02.397709 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-19 21:45:02.398698 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-05-19 21:45:02.399233 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-19 21:45:02.400247 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-05-19 21:45:02.400958 | orchestrator | 2025-05-19 21:45:02.402498 | orchestrator | 2025-05-19 21:45:02.402705 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:45:02.403803 | orchestrator | Monday 19 May 2025 21:45:02 +0000 (0:00:01.068) 0:07:43.748 ************ 2025-05-19 21:45:02.405120 | orchestrator | =============================================================================== 2025-05-19 21:45:02.406314 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.08s 2025-05-19 21:45:02.406629 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.58s 2025-05-19 21:45:02.407954 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.45s 2025-05-19 21:45:02.408570 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.94s 2025-05-19 21:45:02.409686 | orchestrator | osism.commons.systohc : Install util-linux-extra 
package --------------- 11.18s 2025-05-19 21:45:02.410741 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.08s 2025-05-19 21:45:02.412388 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.69s 2025-05-19 21:45:02.412869 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.85s 2025-05-19 21:45:02.413830 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.86s 2025-05-19 21:45:02.414760 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.65s 2025-05-19 21:45:02.415789 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.34s 2025-05-19 21:45:02.417135 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.92s 2025-05-19 21:45:02.417747 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.88s 2025-05-19 21:45:02.418705 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.72s 2025-05-19 21:45:02.419827 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.54s 2025-05-19 21:45:02.420965 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.40s 2025-05-19 21:45:02.421193 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.28s 2025-05-19 21:45:02.422808 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.78s 2025-05-19 21:45:02.423431 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.69s 2025-05-19 21:45:02.424617 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.66s 2025-05-19 21:45:03.038342 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-05-19 21:45:03.038456 | 
orchestrator | + osism apply network 2025-05-19 21:45:04.953714 | orchestrator | 2025-05-19 21:45:04 | INFO  | Task d3f8b575-c185-4d33-951e-952accccd688 (network) was prepared for execution. 2025-05-19 21:45:04.953824 | orchestrator | 2025-05-19 21:45:04 | INFO  | It takes a moment until task d3f8b575-c185-4d33-951e-952accccd688 (network) has been started and output is visible here. 2025-05-19 21:45:09.052975 | orchestrator | 2025-05-19 21:45:09.053099 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-05-19 21:45:09.053748 | orchestrator | 2025-05-19 21:45:09.054365 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-05-19 21:45:09.055234 | orchestrator | Monday 19 May 2025 21:45:09 +0000 (0:00:00.255) 0:00:00.255 ************ 2025-05-19 21:45:09.190897 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:09.265879 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:45:09.340426 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:45:09.400508 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:45:09.515640 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:45:09.605344 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:45:09.608227 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:45:09.608248 | orchestrator | 2025-05-19 21:45:09.609015 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-05-19 21:45:09.609142 | orchestrator | Monday 19 May 2025 21:45:09 +0000 (0:00:00.558) 0:00:00.814 ************ 2025-05-19 21:45:10.453915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 21:45:10.454065 | orchestrator | 2025-05-19 21:45:10.454083 | orchestrator | TASK [osism.commons.network : Install required 
packages] *********************** 2025-05-19 21:45:10.454546 | orchestrator | Monday 19 May 2025 21:45:10 +0000 (0:00:00.845) 0:00:01.659 ************ 2025-05-19 21:45:12.295160 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:12.295262 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:45:12.295278 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:45:12.296791 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:45:12.297711 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:45:12.298736 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:45:12.299423 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:45:12.301309 | orchestrator | 2025-05-19 21:45:12.301335 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-19 21:45:12.302185 | orchestrator | Monday 19 May 2025 21:45:12 +0000 (0:00:01.841) 0:00:03.501 ************ 2025-05-19 21:45:13.895364 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:13.895433 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:45:13.895920 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:45:13.896835 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:45:13.899349 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:45:13.899938 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:45:13.900336 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:45:13.901011 | orchestrator | 2025-05-19 21:45:13.901443 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-19 21:45:13.901974 | orchestrator | Monday 19 May 2025 21:45:13 +0000 (0:00:01.599) 0:00:05.100 ************ 2025-05-19 21:45:14.344856 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-19 21:45:14.772611 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-19 21:45:14.773674 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-19 21:45:14.775067 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-19 
21:45:14.776208 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-19 21:45:14.777276 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-19 21:45:14.778228 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-19 21:45:14.779672 | orchestrator | 2025-05-19 21:45:14.780457 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-19 21:45:14.781312 | orchestrator | Monday 19 May 2025 21:45:14 +0000 (0:00:00.882) 0:00:05.983 ************ 2025-05-19 21:45:17.875351 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 21:45:17.876413 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-19 21:45:17.878608 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 21:45:17.878939 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 21:45:17.880092 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-19 21:45:17.881937 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 21:45:17.883089 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 21:45:17.883921 | orchestrator | 2025-05-19 21:45:17.884811 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-19 21:45:17.885591 | orchestrator | Monday 19 May 2025 21:45:17 +0000 (0:00:03.100) 0:00:09.084 ************ 2025-05-19 21:45:19.314979 | orchestrator | changed: [testbed-manager] 2025-05-19 21:45:19.315545 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:45:19.316858 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:45:19.318135 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:45:19.319054 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:45:19.319917 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:45:19.320616 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:45:19.321277 | orchestrator | 2025-05-19 21:45:19.322277 | orchestrator | TASK [osism.commons.network : Remove netplan 
configuration template] *********** 2025-05-19 21:45:19.323084 | orchestrator | Monday 19 May 2025 21:45:19 +0000 (0:00:01.437) 0:00:10.521 ************ 2025-05-19 21:45:20.811925 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 21:45:20.812561 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 21:45:20.813539 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-19 21:45:20.814895 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-19 21:45:20.815538 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-19 21:45:20.816591 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-19 21:45:20.817176 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-19 21:45:20.817771 | orchestrator | 2025-05-19 21:45:20.818799 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-19 21:45:20.819271 | orchestrator | Monday 19 May 2025 21:45:20 +0000 (0:00:01.500) 0:00:12.021 ************ 2025-05-19 21:45:21.177827 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:21.787141 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:45:21.787619 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:45:21.788636 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:45:21.789425 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:45:21.790388 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:45:21.791683 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:45:21.792032 | orchestrator | 2025-05-19 21:45:21.793294 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-05-19 21:45:21.793748 | orchestrator | Monday 19 May 2025 21:45:21 +0000 (0:00:00.973) 0:00:12.995 ************ 2025-05-19 21:45:21.936867 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:45:22.013286 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:45:22.089068 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:45:22.161659 | orchestrator | skipping: 
[testbed-node-2] 2025-05-19 21:45:22.235769 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:45:22.375674 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:45:22.376053 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:45:22.379614 | orchestrator | 2025-05-19 21:45:22.379680 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-19 21:45:22.379694 | orchestrator | Monday 19 May 2025 21:45:22 +0000 (0:00:00.588) 0:00:13.584 ************ 2025-05-19 21:45:24.565736 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:24.565931 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:45:24.568360 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:45:24.568492 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:45:24.569357 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:45:24.571513 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:45:24.571569 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:45:24.572081 | orchestrator | 2025-05-19 21:45:24.572577 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-19 21:45:24.573433 | orchestrator | Monday 19 May 2025 21:45:24 +0000 (0:00:02.185) 0:00:15.769 ************ 2025-05-19 21:45:24.812041 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:45:24.894593 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:45:24.976973 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:45:25.061921 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:45:25.426437 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:45:25.426635 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:45:25.427079 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-19 21:45:25.427635 | orchestrator | 2025-05-19 21:45:25.428423 | orchestrator | TASK [osism.commons.network : Manage service 
networkd-dispatcher] ************** 2025-05-19 21:45:25.428797 | orchestrator | Monday 19 May 2025 21:45:25 +0000 (0:00:00.867) 0:00:16.636 ************ 2025-05-19 21:45:27.048891 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:27.049571 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:45:27.050665 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:45:27.051350 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:45:27.053551 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:45:27.054450 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:45:27.055170 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:45:27.055710 | orchestrator | 2025-05-19 21:45:27.056509 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-19 21:45:27.057314 | orchestrator | Monday 19 May 2025 21:45:27 +0000 (0:00:01.612) 0:00:18.249 ************ 2025-05-19 21:45:28.291749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 21:45:28.291856 | orchestrator | 2025-05-19 21:45:28.295481 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-19 21:45:28.295981 | orchestrator | Monday 19 May 2025 21:45:28 +0000 (0:00:01.247) 0:00:19.496 ************ 2025-05-19 21:45:28.986611 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:29.415334 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:45:29.417378 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:45:29.419734 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:45:29.420667 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:45:29.421498 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:45:29.422379 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:45:29.426691 | orchestrator | 2025-05-19 
21:45:29.427209 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-19 21:45:29.427638 | orchestrator | Monday 19 May 2025 21:45:29 +0000 (0:00:01.117) 0:00:20.613 ************ 2025-05-19 21:45:29.572341 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:29.655257 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:45:29.740421 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:45:29.822634 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:45:29.903583 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:45:30.048965 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:45:30.049708 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:45:30.050930 | orchestrator | 2025-05-19 21:45:30.051671 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-19 21:45:30.052914 | orchestrator | Monday 19 May 2025 21:45:30 +0000 (0:00:00.645) 0:00:21.258 ************ 2025-05-19 21:45:30.387011 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 21:45:30.685054 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 21:45:30.685219 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 21:45:30.685905 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 21:45:30.686247 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 21:45:30.687002 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 21:45:30.687548 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 21:45:30.688277 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 21:45:30.782632 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 21:45:30.782819 | 
orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 21:45:31.272019 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 21:45:31.272222 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 21:45:31.273300 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-19 21:45:31.274417 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-19 21:45:31.274786 | orchestrator | 2025-05-19 21:45:31.275497 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-19 21:45:31.275892 | orchestrator | Monday 19 May 2025 21:45:31 +0000 (0:00:01.218) 0:00:22.476 ************ 2025-05-19 21:45:31.433374 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:45:31.515149 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:45:31.592964 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:45:31.669704 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:45:31.751076 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:45:31.864542 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:45:31.865113 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:45:31.866573 | orchestrator | 2025-05-19 21:45:31.867503 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-05-19 21:45:31.868858 | orchestrator | Monday 19 May 2025 21:45:31 +0000 (0:00:00.597) 0:00:23.074 ************ 2025-05-19 21:45:35.295715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2025-05-19 21:45:35.296329 | orchestrator | 2025-05-19 21:45:35.296577 | orchestrator | TASK [osism.commons.network : Create 
systemd networkd netdev files] ************ 2025-05-19 21:45:35.297243 | orchestrator | Monday 19 May 2025 21:45:35 +0000 (0:00:03.428) 0:00:26.503 ************ 2025-05-19 21:45:39.433388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:39.433553 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:39.434927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:39.440417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:39.441081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:39.441745 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:39.442535 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:39.445259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:39.446007 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:39.446510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:39.447132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:39.448207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:39.448688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:39.450375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:39.453480 | orchestrator | 2025-05-19 21:45:39.454184 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-05-19 21:45:39.454409 | orchestrator | Monday 19 May 2025 21:45:39 +0000 (0:00:04.137) 0:00:30.640 ************ 2025-05-19 21:45:43.870276 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:43.871095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:43.872415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:43.873477 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:43.876212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:43.877506 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:43.879018 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:43.879667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-19 21:45:43.881403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:43.884717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:43.885424 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-19 
21:45:43.887360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:43.891412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:43.891481 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-19 21:45:43.891491 | orchestrator | 2025-05-19 21:45:43.892390 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-05-19 21:45:43.892753 | orchestrator | Monday 19 May 2025 21:45:43 +0000 (0:00:04.435) 0:00:35.075 ************ 2025-05-19 21:45:45.066373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 21:45:45.066559 | orchestrator | 2025-05-19 21:45:45.067656 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-19 21:45:45.071341 | orchestrator | Monday 19 May 2025 21:45:45 +0000 (0:00:01.196) 0:00:36.272 ************ 2025-05-19 21:45:45.514057 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:46.029587 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:45:46.029865 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:45:46.033400 | 
orchestrator | ok: [testbed-node-2] 2025-05-19 21:45:46.033412 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:45:46.033416 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:45:46.034286 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:45:46.035428 | orchestrator | 2025-05-19 21:45:46.036545 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-19 21:45:46.037496 | orchestrator | Monday 19 May 2025 21:45:46 +0000 (0:00:00.966) 0:00:37.238 ************ 2025-05-19 21:45:46.119328 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 21:45:46.119421 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 21:45:46.120239 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 21:45:46.210824 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 21:45:46.211774 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 21:45:46.212951 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 21:45:46.216326 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 21:45:46.216366 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 21:45:46.299986 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:45:46.300661 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 21:45:46.303184 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 21:45:46.303210 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 21:45:46.303432 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 21:45:46.591372 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:45:46.591617 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 21:45:46.592529 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 21:45:46.593380 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 21:45:46.594122 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 21:45:46.714596 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:45:46.714778 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 21:45:46.716244 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 21:45:46.719519 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 21:45:46.719587 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 21:45:46.810359 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:45:46.810641 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 21:45:46.812670 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 21:45:46.814697 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 21:45:48.056978 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 21:45:48.060331 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:45:48.062110 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:45:48.062902 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-19 21:45:48.064495 | 
orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-19 21:45:48.065498 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-19 21:45:48.066246 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-19 21:45:48.067127 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:45:48.068163 | orchestrator | 2025-05-19 21:45:48.068701 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-05-19 21:45:48.069333 | orchestrator | Monday 19 May 2025 21:45:48 +0000 (0:00:02.023) 0:00:39.262 ************ 2025-05-19 21:45:48.216950 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:45:48.298889 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:45:48.379950 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:45:48.457650 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:45:48.542570 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:45:48.651705 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:45:48.653417 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:45:48.653854 | orchestrator | 2025-05-19 21:45:48.655363 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-05-19 21:45:48.655865 | orchestrator | Monday 19 May 2025 21:45:48 +0000 (0:00:00.599) 0:00:39.861 ************ 2025-05-19 21:45:48.961301 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:45:49.042266 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:45:49.132640 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:45:49.205057 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:45:49.282671 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:45:49.318848 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:45:49.319748 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:45:49.320198 | 
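The cleanup task above iterates over per-interface systemd-networkd files (`30-vxlan0.network`, `30-vxlan0.netdev`, and the vxlan1 pair); earlier tasks in the same role created them. As a hedged sketch of what such a netdev/network pair might contain (the filenames come from this log, but the VNI, remote endpoint, and addressing below are illustrative assumptions, not values from this job):

```shell
# Sketch: generate a systemd-networkd VXLAN netdev/network pair like the
# 30-vxlan0.* files the osism.commons.network role manages.
# VNI, remote endpoint and address are illustrative assumptions.
dir="$(mktemp -d)"

cat > "$dir/30-vxlan0.netdev" <<'EOF'
[NetDev]
Name=vxlan0
Kind=vxlan

[VXLAN]
VNI=42
Remote=192.168.16.254
EOF

cat > "$dir/30-vxlan0.network" <<'EOF'
[Match]
Name=vxlan0

[Network]
Address=192.168.100.10/24
EOF

# On a real host the role's handler would then reload networkd:
#   networkctl reload    (or: systemctl restart systemd-networkd)
ls "$dir"
```

Since every node here reports `skipping:`, no file matched the removal list and the subsequent "Reload systemd-networkd" handler is skipped as well.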
orchestrator | 2025-05-19 21:45:49.321109 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:45:49.321173 | orchestrator | 2025-05-19 21:45:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 21:45:49.321508 | orchestrator | 2025-05-19 21:45:49 | INFO  | Please wait and do not abort execution. 2025-05-19 21:45:49.322490 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 21:45:49.322847 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 21:45:49.323555 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 21:45:49.324132 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 21:45:49.324563 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 21:45:49.325506 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 21:45:49.325719 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 21:45:49.326190 | orchestrator | 2025-05-19 21:45:49.326953 | orchestrator | 2025-05-19 21:45:49.327389 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:45:49.327717 | orchestrator | Monday 19 May 2025 21:45:49 +0000 (0:00:00.668) 0:00:40.530 ************ 2025-05-19 21:45:49.328644 | orchestrator | =============================================================================== 2025-05-19 21:45:49.328725 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.44s 2025-05-19 21:45:49.330083 | orchestrator | osism.commons.network : Create 
systemd networkd netdev files ------------ 4.14s 2025-05-19 21:45:49.330944 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.43s 2025-05-19 21:45:49.330967 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.10s 2025-05-19 21:45:49.331270 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.19s 2025-05-19 21:45:49.332222 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.02s 2025-05-19 21:45:49.332332 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.84s 2025-05-19 21:45:49.332416 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.61s 2025-05-19 21:45:49.332849 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.60s 2025-05-19 21:45:49.333243 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.50s 2025-05-19 21:45:49.333413 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.44s 2025-05-19 21:45:49.333726 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.25s 2025-05-19 21:45:49.334106 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.22s 2025-05-19 21:45:49.334310 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.20s 2025-05-19 21:45:49.334630 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.12s 2025-05-19 21:45:49.334848 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.97s 2025-05-19 21:45:49.335223 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s 2025-05-19 21:45:49.335590 | orchestrator | osism.commons.network : Create required 
directories --------------------- 0.88s 2025-05-19 21:45:49.335774 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.87s 2025-05-19 21:45:49.336162 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 0.85s 2025-05-19 21:45:49.929170 | orchestrator | + osism apply wireguard 2025-05-19 21:45:51.633246 | orchestrator | 2025-05-19 21:45:51 | INFO  | Task 572db4f4-8c9b-448b-9745-e9c9e5db86a0 (wireguard) was prepared for execution. 2025-05-19 21:45:51.633358 | orchestrator | 2025-05-19 21:45:51 | INFO  | It takes a moment until task 572db4f4-8c9b-448b-9745-e9c9e5db86a0 (wireguard) has been started and output is visible here. 2025-05-19 21:45:55.568903 | orchestrator | 2025-05-19 21:45:55.569295 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-19 21:45:55.570230 | orchestrator | 2025-05-19 21:45:55.571025 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-19 21:45:55.573672 | orchestrator | Monday 19 May 2025 21:45:55 +0000 (0:00:00.219) 0:00:00.219 ************ 2025-05-19 21:45:57.103038 | orchestrator | ok: [testbed-manager] 2025-05-19 21:45:57.103932 | orchestrator | 2025-05-19 21:45:57.105141 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-19 21:45:57.105856 | orchestrator | Monday 19 May 2025 21:45:57 +0000 (0:00:01.535) 0:00:01.755 ************ 2025-05-19 21:46:03.262761 | orchestrator | changed: [testbed-manager] 2025-05-19 21:46:03.263867 | orchestrator | 2025-05-19 21:46:03.264547 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-19 21:46:03.265893 | orchestrator | Monday 19 May 2025 21:46:03 +0000 (0:00:06.160) 0:00:07.915 ************ 2025-05-19 21:46:03.801986 | orchestrator | changed: [testbed-manager] 2025-05-19 21:46:03.802705 | orchestrator | 2025-05-19 
21:46:03.803880 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-19 21:46:03.805097 | orchestrator | Monday 19 May 2025 21:46:03 +0000 (0:00:00.539) 0:00:08.455 ************ 2025-05-19 21:46:04.253781 | orchestrator | changed: [testbed-manager] 2025-05-19 21:46:04.254469 | orchestrator | 2025-05-19 21:46:04.254651 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-19 21:46:04.254998 | orchestrator | Monday 19 May 2025 21:46:04 +0000 (0:00:00.451) 0:00:08.907 ************ 2025-05-19 21:46:04.871911 | orchestrator | ok: [testbed-manager] 2025-05-19 21:46:04.872018 | orchestrator | 2025-05-19 21:46:04.872033 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-19 21:46:04.877263 | orchestrator | Monday 19 May 2025 21:46:04 +0000 (0:00:00.616) 0:00:09.523 ************ 2025-05-19 21:46:05.259742 | orchestrator | ok: [testbed-manager] 2025-05-19 21:46:05.260469 | orchestrator | 2025-05-19 21:46:05.260803 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-19 21:46:05.261281 | orchestrator | Monday 19 May 2025 21:46:05 +0000 (0:00:00.388) 0:00:09.911 ************ 2025-05-19 21:46:05.664852 | orchestrator | ok: [testbed-manager] 2025-05-19 21:46:05.665638 | orchestrator | 2025-05-19 21:46:05.667722 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-19 21:46:05.668146 | orchestrator | Monday 19 May 2025 21:46:05 +0000 (0:00:00.404) 0:00:10.316 ************ 2025-05-19 21:46:06.786123 | orchestrator | changed: [testbed-manager] 2025-05-19 21:46:06.787151 | orchestrator | 2025-05-19 21:46:06.787895 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-05-19 21:46:06.788318 | orchestrator | Monday 19 May 2025 21:46:06 +0000 (0:00:01.121) 0:00:11.438 
************ 2025-05-19 21:46:07.652168 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-19 21:46:07.653044 | orchestrator | changed: [testbed-manager] 2025-05-19 21:46:07.653214 | orchestrator | 2025-05-19 21:46:07.655632 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-19 21:46:07.656189 | orchestrator | Monday 19 May 2025 21:46:07 +0000 (0:00:00.865) 0:00:12.304 ************ 2025-05-19 21:46:09.306340 | orchestrator | changed: [testbed-manager] 2025-05-19 21:46:09.306501 | orchestrator | 2025-05-19 21:46:09.307261 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-19 21:46:09.308646 | orchestrator | Monday 19 May 2025 21:46:09 +0000 (0:00:01.651) 0:00:13.956 ************ 2025-05-19 21:46:10.181376 | orchestrator | changed: [testbed-manager] 2025-05-19 21:46:10.182779 | orchestrator | 2025-05-19 21:46:10.184057 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:46:10.184466 | orchestrator | 2025-05-19 21:46:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 21:46:10.185304 | orchestrator | 2025-05-19 21:46:10 | INFO  | Please wait and do not abort execution. 
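The wireguard role above creates a server keypair and a preshared key, writes `/etc/wireguard/wg0.conf` plus client configuration files, and enables `wg-quick@wg0.service`. A minimal sketch of the kind of config `wg-quick` expects (all keys, addresses, and the port below are placeholders, not the testbed's values — the role derives real keys via `wg genkey` / `wg pubkey` / `wg genpsk`):

```shell
# Sketch: build a minimal wg0.conf in a scratch file, shaped the way
# wg-quick expects it. Keys/addresses are dummy placeholders.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
[Interface]
# PrivateKey = <server private key from `wg genkey`>
Address = 192.168.48.1/24
ListenPort = 51820

[Peer]
# PublicKey = <client public key from `wg pubkey`>
# PresharedKey = <from `wg genpsk`>
AllowedIPs = 192.168.48.2/32
EOF

# On a real host: systemctl enable --now wg-quick@wg0
grep -c '^\[' "$conf"   # counts the two section headers
```

Restarting `wg0` via the handler, as the log shows, is what actually brings the tunnel up after the config lands.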
2025-05-19 21:46:10.186467 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:46:10.187570 | orchestrator | 2025-05-19 21:46:10.188392 | orchestrator | 2025-05-19 21:46:10.189282 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:46:10.190161 | orchestrator | Monday 19 May 2025 21:46:10 +0000 (0:00:00.878) 0:00:14.834 ************ 2025-05-19 21:46:10.190516 | orchestrator | =============================================================================== 2025-05-19 21:46:10.191225 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.16s 2025-05-19 21:46:10.191614 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.65s 2025-05-19 21:46:10.192344 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.54s 2025-05-19 21:46:10.193323 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.12s 2025-05-19 21:46:10.193410 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.88s 2025-05-19 21:46:10.193984 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.87s 2025-05-19 21:46:10.194432 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.62s 2025-05-19 21:46:10.195055 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 2025-05-19 21:46:10.195456 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2025-05-19 21:46:10.196039 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-05-19 21:46:10.196604 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.39s 2025-05-19 21:46:10.704207 | orchestrator | + sh 
-c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-19 21:46:10.739936 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-19 21:46:10.740038 | orchestrator | Dload Upload Total Spent Left Speed 2025-05-19 21:46:10.812787 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 204 0 --:--:-- --:--:-- --:--:-- 205 2025-05-19 21:46:10.829677 | orchestrator | + osism apply --environment custom workarounds 2025-05-19 21:46:12.478526 | orchestrator | 2025-05-19 21:46:12 | INFO  | Trying to run play workarounds in environment custom 2025-05-19 21:46:12.544226 | orchestrator | 2025-05-19 21:46:12 | INFO  | Task 57c40e99-06f1-4893-af1a-ed6bdcaaa617 (workarounds) was prepared for execution. 2025-05-19 21:46:12.544345 | orchestrator | 2025-05-19 21:46:12 | INFO  | It takes a moment until task 57c40e99-06f1-4893-af1a-ed6bdcaaa617 (workarounds) has been started and output is visible here. 2025-05-19 21:46:16.275194 | orchestrator | 2025-05-19 21:46:16.277468 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 21:46:16.277528 | orchestrator | 2025-05-19 21:46:16.278094 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-19 21:46:16.278981 | orchestrator | Monday 19 May 2025 21:46:16 +0000 (0:00:00.111) 0:00:00.111 ************ 2025-05-19 21:46:16.394676 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-19 21:46:16.455602 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-19 21:46:16.517299 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-19 21:46:16.577717 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-19 21:46:16.705161 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-19 21:46:16.837222 | orchestrator 
| changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-19 21:46:16.837666 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-19 21:46:16.839172 | orchestrator | 2025-05-19 21:46:16.841691 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-19 21:46:16.842284 | orchestrator | 2025-05-19 21:46:16.843086 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-19 21:46:16.843579 | orchestrator | Monday 19 May 2025 21:46:16 +0000 (0:00:00.567) 0:00:00.679 ************ 2025-05-19 21:46:19.175702 | orchestrator | ok: [testbed-manager] 2025-05-19 21:46:19.175797 | orchestrator | 2025-05-19 21:46:19.175813 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-19 21:46:19.175883 | orchestrator | 2025-05-19 21:46:19.176346 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-19 21:46:19.178634 | orchestrator | Monday 19 May 2025 21:46:19 +0000 (0:00:02.335) 0:00:03.014 ************ 2025-05-19 21:46:20.928013 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:46:20.928912 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:46:20.930252 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:46:20.931429 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:46:20.932596 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:46:20.934516 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:46:20.934568 | orchestrator | 2025-05-19 21:46:20.935779 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-19 21:46:20.936308 | orchestrator | 2025-05-19 21:46:20.937072 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-19 21:46:20.937513 | orchestrator | Monday 19 May 2025 21:46:20 +0000 (0:00:01.751) 0:00:04.766 ************ 
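Both netplan plays above report `ok` rather than `changed`: applying an unchanged configuration is idempotent. A hedged sketch of the kind of renderer handoff involved (the filename and interface addressing are assumptions; a real host would follow the write with `netplan generate && netplan apply`):

```shell
# Sketch: a minimal netplan YAML delegating to the networkd renderer,
# written to a scratch dir instead of /etc/netplan.
# Filename and interface name are illustrative assumptions.
yaml="$(mktemp -d)/01-testbed.yaml"
cat > "$yaml" <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: true
EOF

# On a real host: netplan generate && netplan apply
grep -q 'renderer: networkd' "$yaml" && echo "netplan fragment written"
```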
2025-05-19 21:46:22.284496 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 21:46:22.286396 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 21:46:22.286429 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 21:46:22.288275 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 21:46:22.288974 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 21:46:22.290058 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-19 21:46:22.290703 | orchestrator | 2025-05-19 21:46:22.291600 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-19 21:46:22.292538 | orchestrator | Monday 19 May 2025 21:46:22 +0000 (0:00:01.356) 0:00:06.122 ************ 2025-05-19 21:46:25.890129 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:46:25.890244 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:46:25.890683 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:46:25.892289 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:46:25.893618 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:46:25.893803 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:46:25.895260 | orchestrator | 2025-05-19 21:46:25.896210 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-19 21:46:25.896909 | orchestrator | Monday 19 May 2025 21:46:25 +0000 (0:00:03.604) 0:00:09.727 ************ 2025-05-19 21:46:26.039301 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:46:26.112885 | orchestrator | skipping: 
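The tasks above distribute `/opt/configuration/environments/kolla/certificates/ca/testbed.crt` to the non-manager nodes and run `update-ca-certificates`; the RedHat-family equivalent (`update-ca-trust`) is skipped because these hosts are Ubuntu. On Debian/Ubuntu the procedure amounts to dropping a PEM under `/usr/local/share/ca-certificates/` and regenerating the trust store. A sketch against a scratch directory (the paths below stand in for the real trust store):

```shell
# Sketch: Debian-style custom CA installation, using a scratch dir in
# place of /usr/local/share/ca-certificates.
store="$(mktemp -d)"
cert="$store/testbed.crt"
printf -- '-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n' > "$cert"

# On a real Debian/Ubuntu host:
#   install -m 0644 testbed.crt /usr/local/share/ca-certificates/
#   update-ca-certificates          # rebuilds /etc/ssl/certs
# RedHat family instead uses:
#   update-ca-trust extract
echo "staged $(basename "$cert")"
```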
[testbed-node-4] 2025-05-19 21:46:26.188216 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:46:26.262288 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:46:26.560428 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:46:26.560572 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:46:26.561117 | orchestrator | 2025-05-19 21:46:26.561993 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-19 21:46:26.562930 | orchestrator | 2025-05-19 21:46:26.563538 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-19 21:46:26.563901 | orchestrator | Monday 19 May 2025 21:46:26 +0000 (0:00:00.672) 0:00:10.399 ************ 2025-05-19 21:46:28.406823 | orchestrator | changed: [testbed-manager] 2025-05-19 21:46:28.407609 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:46:28.408870 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:46:28.410952 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:46:28.411957 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:46:28.412653 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:46:28.413486 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:46:28.414564 | orchestrator | 2025-05-19 21:46:28.414638 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-19 21:46:28.415112 | orchestrator | Monday 19 May 2025 21:46:28 +0000 (0:00:01.845) 0:00:12.245 ************ 2025-05-19 21:46:29.993008 | orchestrator | changed: [testbed-manager] 2025-05-19 21:46:29.993142 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:46:29.993158 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:46:29.993170 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:46:29.993654 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:46:29.994545 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:46:29.995218 | orchestrator | 
changed: [testbed-node-2] 2025-05-19 21:46:29.996166 | orchestrator | 2025-05-19 21:46:29.996905 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-19 21:46:29.997371 | orchestrator | Monday 19 May 2025 21:46:29 +0000 (0:00:01.580) 0:00:13.825 ************ 2025-05-19 21:46:31.481025 | orchestrator | ok: [testbed-manager] 2025-05-19 21:46:31.481412 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:46:31.482595 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:46:31.484534 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:46:31.485599 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:46:31.487734 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:46:31.488335 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:46:31.489037 | orchestrator | 2025-05-19 21:46:31.489732 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-19 21:46:31.490487 | orchestrator | Monday 19 May 2025 21:46:31 +0000 (0:00:01.494) 0:00:15.319 ************ 2025-05-19 21:46:33.216251 | orchestrator | changed: [testbed-manager] 2025-05-19 21:46:33.218618 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:46:33.218741 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:46:33.223791 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:46:33.224537 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:46:33.225622 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:46:33.226302 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:46:33.226926 | orchestrator | 2025-05-19 21:46:33.227637 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-19 21:46:33.229659 | orchestrator | Monday 19 May 2025 21:46:33 +0000 (0:00:01.732) 0:00:17.052 ************ 2025-05-19 21:46:33.371741 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:46:33.442583 | orchestrator | skipping: [testbed-node-3] 2025-05-19 
21:46:33.517703 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:46:33.595091 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:46:33.669780 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:46:33.784898 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:46:33.784984 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:46:33.785756 | orchestrator | 2025-05-19 21:46:33.789694 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-19 21:46:33.789745 | orchestrator | 2025-05-19 21:46:33.789765 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-19 21:46:33.789783 | orchestrator | Monday 19 May 2025 21:46:33 +0000 (0:00:00.572) 0:00:17.624 ************ 2025-05-19 21:46:36.527243 | orchestrator | ok: [testbed-manager] 2025-05-19 21:46:36.529175 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:46:36.530127 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:46:36.532564 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:46:36.533205 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:46:36.536869 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:46:36.537194 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:46:36.540536 | orchestrator | 2025-05-19 21:46:36.541136 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:46:36.542086 | orchestrator | 2025-05-19 21:46:36 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 21:46:36.542278 | orchestrator | 2025-05-19 21:46:36 | INFO  | Please wait and do not abort execution. 
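The "Add a workaround service" play above ships a `workarounds.sh` script and a systemd unit, reloads the daemon, and enables the unit on the Debian-family hosts. A plausible shape for such a unit is a oneshot service run at boot (the `Type=oneshot` choice and the paths below are assumptions, not copied from the testbed repository):

```shell
# Sketch: a oneshot unit wrapping a workarounds script, written to a
# scratch dir instead of /etc/systemd/system. Unit contents are assumed.
unit="$(mktemp -d)/workarounds.service"
cat > "$unit" <<'EOF'
[Unit]
Description=Apply local workarounds at boot
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/workarounds.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF

# On a real host: systemctl daemon-reload && systemctl enable workarounds.service
grep -q 'Type=oneshot' "$unit" && echo "unit staged"
```

`RemainAfterExit=yes` keeps the unit reporting active after the script finishes, which is the usual idiom for run-once boot fixups.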
2025-05-19 21:46:36.543193 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:46:36.543814 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:36.545020 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:36.545546 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:36.546669 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:36.547199 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:36.547647 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:36.548130 | orchestrator | 2025-05-19 21:46:36.548631 | orchestrator | 2025-05-19 21:46:36.548927 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:46:36.549470 | orchestrator | Monday 19 May 2025 21:46:36 +0000 (0:00:02.741) 0:00:20.366 ************ 2025-05-19 21:46:36.551978 | orchestrator | =============================================================================== 2025-05-19 21:46:36.553528 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.60s 2025-05-19 21:46:36.554678 | orchestrator | Install python3-docker -------------------------------------------------- 2.74s 2025-05-19 21:46:36.555510 | orchestrator | Apply netplan configuration --------------------------------------------- 2.34s 2025-05-19 21:46:36.556301 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.85s 2025-05-19 21:46:36.556898 | orchestrator | Apply netplan configuration --------------------------------------------- 1.75s 
2025-05-19 21:46:36.558716 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.73s 2025-05-19 21:46:36.559565 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s 2025-05-19 21:46:36.560453 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2025-05-19 21:46:36.561055 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.36s 2025-05-19 21:46:36.561862 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.67s 2025-05-19 21:46:36.562244 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.57s 2025-05-19 21:46:36.562640 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.57s 2025-05-19 21:46:37.102290 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-19 21:46:38.789752 | orchestrator | 2025-05-19 21:46:38 | INFO  | Task c7d091cd-bcc6-4b96-a6fb-834fa1706d6e (reboot) was prepared for execution. 2025-05-19 21:46:38.789857 | orchestrator | 2025-05-19 21:46:38 | INFO  | It takes a moment until task c7d091cd-bcc6-4b96-a6fb-834fa1706d6e (reboot) has been started and output is visible here. 
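The reboot play invoked above (`osism apply reboot -l testbed-nodes -e ireallymeanit=yes`) refuses to run unless the extra variable is set, which is why the "Exit playbook, if user did not mean to reboot systems" guard task is skipped on every node below. The same confirmation pattern can be sketched in plain shell (the function name is ours; only the `ireallymeanit` variable name comes from the log):

```shell
# Sketch of the ireallymeanit confirmation guard used by the reboot play:
# refuse to act unless the caller explicitly opted in.
maybe_reboot() {
    if [ "${1:-no}" != "yes" ]; then
        echo "refusing to reboot: pass ireallymeanit=yes"
        return 1
    fi
    echo "would reboot now"
}

maybe_reboot no || true    # guard trips, nothing happens
maybe_reboot yes           # explicit opt-in proceeds
```

The play also runs fire-and-forget ("do not wait for the reboot to complete"), so each node shows one `changed` reboot task and a skipped wait task.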
2025-05-19 21:46:42.425637 | orchestrator | 2025-05-19 21:46:42.425736 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 21:46:42.425754 | orchestrator | 2025-05-19 21:46:42.425836 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 21:46:42.426083 | orchestrator | Monday 19 May 2025 21:46:42 +0000 (0:00:00.152) 0:00:00.152 ************ 2025-05-19 21:46:42.526631 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:46:42.527159 | orchestrator | 2025-05-19 21:46:42.528059 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 21:46:42.528673 | orchestrator | Monday 19 May 2025 21:46:42 +0000 (0:00:00.102) 0:00:00.254 ************ 2025-05-19 21:46:43.389036 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:46:43.389729 | orchestrator | 2025-05-19 21:46:43.390178 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 21:46:43.390796 | orchestrator | Monday 19 May 2025 21:46:43 +0000 (0:00:00.862) 0:00:01.116 ************ 2025-05-19 21:46:43.481905 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:46:43.482869 | orchestrator | 2025-05-19 21:46:43.482904 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 21:46:43.483708 | orchestrator | 2025-05-19 21:46:43.484903 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 21:46:43.485682 | orchestrator | Monday 19 May 2025 21:46:43 +0000 (0:00:00.091) 0:00:01.207 ************ 2025-05-19 21:46:43.565707 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:46:43.565771 | orchestrator | 2025-05-19 21:46:43.566294 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 21:46:43.566746 | orchestrator | Monday 19 May 2025 
21:46:43 +0000 (0:00:00.084) 0:00:01.292 ************ 2025-05-19 21:46:44.202386 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:46:44.202636 | orchestrator | 2025-05-19 21:46:44.203855 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 21:46:44.204164 | orchestrator | Monday 19 May 2025 21:46:44 +0000 (0:00:00.636) 0:00:01.928 ************ 2025-05-19 21:46:44.316201 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:46:44.317558 | orchestrator | 2025-05-19 21:46:44.318262 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 21:46:44.319091 | orchestrator | 2025-05-19 21:46:44.320885 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 21:46:44.320914 | orchestrator | Monday 19 May 2025 21:46:44 +0000 (0:00:00.114) 0:00:02.043 ************ 2025-05-19 21:46:44.461652 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:46:44.462761 | orchestrator | 2025-05-19 21:46:44.463355 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 21:46:44.464281 | orchestrator | Monday 19 May 2025 21:46:44 +0000 (0:00:00.145) 0:00:02.188 ************ 2025-05-19 21:46:45.115333 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:46:45.116187 | orchestrator | 2025-05-19 21:46:45.116656 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 21:46:45.117942 | orchestrator | Monday 19 May 2025 21:46:45 +0000 (0:00:00.654) 0:00:02.842 ************ 2025-05-19 21:46:45.221584 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:46:45.222300 | orchestrator | 2025-05-19 21:46:45.223319 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 21:46:45.223796 | orchestrator | 2025-05-19 21:46:45.224466 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2025-05-19 21:46:45.224868 | orchestrator | Monday 19 May 2025 21:46:45 +0000 (0:00:00.104) 0:00:02.947 ************ 2025-05-19 21:46:45.308396 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:46:45.308544 | orchestrator | 2025-05-19 21:46:45.309216 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 21:46:45.309772 | orchestrator | Monday 19 May 2025 21:46:45 +0000 (0:00:00.088) 0:00:03.035 ************ 2025-05-19 21:46:45.970306 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:46:45.970412 | orchestrator | 2025-05-19 21:46:45.970496 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 21:46:45.970517 | orchestrator | Monday 19 May 2025 21:46:45 +0000 (0:00:00.660) 0:00:03.696 ************ 2025-05-19 21:46:46.086237 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:46:46.086325 | orchestrator | 2025-05-19 21:46:46.087665 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 21:46:46.087689 | orchestrator | 2025-05-19 21:46:46.088279 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 21:46:46.089057 | orchestrator | Monday 19 May 2025 21:46:46 +0000 (0:00:00.114) 0:00:03.811 ************ 2025-05-19 21:46:46.169373 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:46:46.169805 | orchestrator | 2025-05-19 21:46:46.170280 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 21:46:46.170448 | orchestrator | Monday 19 May 2025 21:46:46 +0000 (0:00:00.083) 0:00:03.895 ************ 2025-05-19 21:46:46.833257 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:46:46.834345 | orchestrator | 2025-05-19 21:46:46.834751 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-05-19 21:46:46.835725 | orchestrator | Monday 19 May 2025 21:46:46 +0000 (0:00:00.663) 0:00:04.559 ************ 2025-05-19 21:46:46.934314 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:46:46.934990 | orchestrator | 2025-05-19 21:46:46.936008 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-19 21:46:46.936699 | orchestrator | 2025-05-19 21:46:46.937628 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-19 21:46:46.938471 | orchestrator | Monday 19 May 2025 21:46:46 +0000 (0:00:00.099) 0:00:04.659 ************ 2025-05-19 21:46:47.019815 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:46:47.019915 | orchestrator | 2025-05-19 21:46:47.020720 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-19 21:46:47.021054 | orchestrator | Monday 19 May 2025 21:46:47 +0000 (0:00:00.087) 0:00:04.746 ************ 2025-05-19 21:46:47.668881 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:46:47.669053 | orchestrator | 2025-05-19 21:46:47.669909 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-19 21:46:47.670691 | orchestrator | Monday 19 May 2025 21:46:47 +0000 (0:00:00.647) 0:00:05.394 ************ 2025-05-19 21:46:47.704167 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:46:47.704548 | orchestrator | 2025-05-19 21:46:47.705502 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:46:47.705826 | orchestrator | 2025-05-19 21:46:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 21:46:47.706188 | orchestrator | 2025-05-19 21:46:47 | INFO  | Please wait and do not abort execution. 
2025-05-19 21:46:47.708129 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:47.708961 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:47.710860 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:47.711586 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:47.712642 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:47.713917 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:46:47.714080 | orchestrator | 2025-05-19 21:46:47.714862 | orchestrator | 2025-05-19 21:46:47.715571 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:46:47.716273 | orchestrator | Monday 19 May 2025 21:46:47 +0000 (0:00:00.037) 0:00:05.431 ************ 2025-05-19 21:46:47.716884 | orchestrator | =============================================================================== 2025-05-19 21:46:47.717315 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.13s 2025-05-19 21:46:47.717811 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.59s 2025-05-19 21:46:47.718253 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.56s 2025-05-19 21:46:48.223823 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-19 21:46:49.866533 | orchestrator | 2025-05-19 21:46:49 | INFO  | Task 4522ceb1-9855-4d91-8267-589e9a24a12d (wait-for-connection) was prepared for execution. 
2025-05-19 21:46:49.866653 | orchestrator | 2025-05-19 21:46:49 | INFO  | It takes a moment until task 4522ceb1-9855-4d91-8267-589e9a24a12d (wait-for-connection) has been started and output is visible here. 2025-05-19 21:46:53.859005 | orchestrator | 2025-05-19 21:46:53.859122 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-19 21:46:53.859141 | orchestrator | 2025-05-19 21:46:53.861104 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-19 21:46:53.861867 | orchestrator | Monday 19 May 2025 21:46:53 +0000 (0:00:00.233) 0:00:00.233 ************ 2025-05-19 21:47:05.569860 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:47:05.569996 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:47:05.570101 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:47:05.571363 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:47:05.572996 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:47:05.573822 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:47:05.574635 | orchestrator | 2025-05-19 21:47:05.575357 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:47:05.575632 | orchestrator | 2025-05-19 21:47:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 21:47:05.575848 | orchestrator | 2025-05-19 21:47:05 | INFO  | Please wait and do not abort execution. 
2025-05-19 21:47:05.576648 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:47:05.579704 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:47:05.580244 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:47:05.580889 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:47:05.581111 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:47:05.581510 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:47:05.582077 | orchestrator | 2025-05-19 21:47:05.582710 | orchestrator | 2025-05-19 21:47:05.583220 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:47:05.588137 | orchestrator | Monday 19 May 2025 21:47:05 +0000 (0:00:11.712) 0:00:11.946 ************ 2025-05-19 21:47:05.589830 | orchestrator | =============================================================================== 2025-05-19 21:47:05.590551 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.71s 2025-05-19 21:47:06.124777 | orchestrator | + osism apply hddtemp 2025-05-19 21:47:07.811887 | orchestrator | 2025-05-19 21:47:07 | INFO  | Task a0f96cff-2a57-4491-99bf-f96693e2e5bd (hddtemp) was prepared for execution. 2025-05-19 21:47:07.811988 | orchestrator | 2025-05-19 21:47:07 | INFO  | It takes a moment until task a0f96cff-2a57-4491-99bf-f96693e2e5bd (hddtemp) has been started and output is visible here. 
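The wait-for-connection step above is a plain readiness poll: keep probing each rebooted node until it answers. A minimal shell sketch of the same idea, assuming any probe command that exits 0 once the host is back (the play itself uses Ansible's connection plumbing, not this wrapper):

```shell
#!/usr/bin/env bash
# Generic "wait until reachable" loop, analogous to the
# wait-for-connection play above. The probe is any command that exits 0
# once the rebooted host answers (e.g. `ssh <host> true`); this wrapper
# is a sketch, not osism code.
wait_until_ok() {
  local max_attempts="$1"
  shift
  local attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      return 1  # host never came back within the budget
    fi
    attempt=$((attempt + 1))
    sleep 1     # assumed poll interval
  done
}

# Example (hypothetical): wait_until_ok 60 ssh testbed-node-0 true
```

Splitting "reboot without waiting" from a separate wait play, as the log does, lets all six nodes reboot in parallel before any waiting starts.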
2025-05-19 21:47:11.648484 | orchestrator | 2025-05-19 21:47:11.650367 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-19 21:47:11.650695 | orchestrator | 2025-05-19 21:47:11.652599 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-19 21:47:11.653451 | orchestrator | Monday 19 May 2025 21:47:11 +0000 (0:00:00.217) 0:00:00.217 ************ 2025-05-19 21:47:11.779826 | orchestrator | ok: [testbed-manager] 2025-05-19 21:47:11.848662 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:47:11.911719 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:47:11.976595 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:47:12.110905 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:47:12.223730 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:47:12.224134 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:47:12.224760 | orchestrator | 2025-05-19 21:47:12.225315 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-19 21:47:12.226083 | orchestrator | Monday 19 May 2025 21:47:12 +0000 (0:00:00.574) 0:00:00.792 ************ 2025-05-19 21:47:13.240212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 21:47:13.241483 | orchestrator | 2025-05-19 21:47:13.242114 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-19 21:47:13.243172 | orchestrator | Monday 19 May 2025 21:47:13 +0000 (0:00:01.015) 0:00:01.808 ************ 2025-05-19 21:47:15.140177 | orchestrator | ok: [testbed-manager] 2025-05-19 21:47:15.140594 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:47:15.141954 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:47:15.143288 | 
orchestrator | ok: [testbed-node-2] 2025-05-19 21:47:15.144587 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:47:15.145207 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:47:15.146650 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:47:15.147672 | orchestrator | 2025-05-19 21:47:15.148843 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-19 21:47:15.149635 | orchestrator | Monday 19 May 2025 21:47:15 +0000 (0:00:01.901) 0:00:03.709 ************ 2025-05-19 21:47:15.743875 | orchestrator | changed: [testbed-manager] 2025-05-19 21:47:15.823046 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:47:16.266908 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:47:16.267461 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:47:16.271479 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:47:16.272801 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:47:16.274685 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:47:16.274978 | orchestrator | 2025-05-19 21:47:16.276481 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-19 21:47:16.277317 | orchestrator | Monday 19 May 2025 21:47:16 +0000 (0:00:01.122) 0:00:04.832 ************ 2025-05-19 21:47:17.349310 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:47:17.349478 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:47:17.349923 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:47:17.350302 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:47:17.351235 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:47:17.352070 | orchestrator | ok: [testbed-manager] 2025-05-19 21:47:17.354975 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:47:17.355626 | orchestrator | 2025-05-19 21:47:17.355846 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-19 21:47:17.356509 | orchestrator | Monday 19 May 2025 21:47:17 +0000 
(0:00:01.082) 0:00:05.915 ************ 2025-05-19 21:47:17.754939 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:47:17.833207 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:47:17.909152 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:47:17.999248 | orchestrator | changed: [testbed-manager] 2025-05-19 21:47:18.116217 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:47:18.116763 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:47:18.117651 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:47:18.118493 | orchestrator | 2025-05-19 21:47:18.119617 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-19 21:47:18.121192 | orchestrator | Monday 19 May 2025 21:47:18 +0000 (0:00:00.771) 0:00:06.686 ************ 2025-05-19 21:47:31.123885 | orchestrator | changed: [testbed-manager] 2025-05-19 21:47:31.125571 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:47:31.125631 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:47:31.125652 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:47:31.125934 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:47:31.127781 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:47:31.128897 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:47:31.129626 | orchestrator | 2025-05-19 21:47:31.130529 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-19 21:47:31.131124 | orchestrator | Monday 19 May 2025 21:47:31 +0000 (0:00:13.000) 0:00:19.687 ************ 2025-05-19 21:47:32.326713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 21:47:32.327020 | orchestrator | 2025-05-19 21:47:32.327965 | orchestrator | TASK [osism.services.hddtemp : 
Manage lm-sensors service] ********************** 2025-05-19 21:47:32.328567 | orchestrator | Monday 19 May 2025 21:47:32 +0000 (0:00:01.205) 0:00:20.893 ************ 2025-05-19 21:47:34.168726 | orchestrator | changed: [testbed-manager] 2025-05-19 21:47:34.168841 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:47:34.169557 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:47:34.170148 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:47:34.171223 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:47:34.171781 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:47:34.172588 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:47:34.173448 | orchestrator | 2025-05-19 21:47:34.175843 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:47:34.175873 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:47:34.175907 | orchestrator | 2025-05-19 21:47:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 21:47:34.175922 | orchestrator | 2025-05-19 21:47:34 | INFO  | Please wait and do not abort execution. 
2025-05-19 21:47:34.176394 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:34.176978 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:34.177678 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:34.178447 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:34.179025 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:34.179780 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:34.180303 | orchestrator | 2025-05-19 21:47:34.181058 | orchestrator | 2025-05-19 21:47:34.183279 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:47:34.183451 | orchestrator | Monday 19 May 2025 21:47:34 +0000 (0:00:01.844) 0:00:22.737 ************ 2025-05-19 21:47:34.194216 | orchestrator | =============================================================================== 2025-05-19 21:47:34.194260 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.00s 2025-05-19 21:47:34.194333 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.90s 2025-05-19 21:47:34.195193 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s 2025-05-19 21:47:34.196019 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.21s 2025-05-19 21:47:34.196639 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.12s 2025-05-19 21:47:34.197316 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.08s 
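The drivetemp handling in the hddtemp role above (enable at boot, check whether the module is present, load it only where needed — the manager was the only host where the load task actually ran) can be sketched in shell. This wrapper is ours for illustration; the role itself uses Ansible modules:

```shell
# Shell sketch of the hddtemp role's drivetemp steps (hypothetical
# wrapper; the role uses Ansible modules, not this script).
module_loaded() {
  # Expects a /proc/modules-style listing on stdin.
  grep -q "^$1 "
}

ensure_module() {
  local mod="$1"
  # Persist the module so it is loaded on every boot.
  echo "$mod" | sudo tee "/etc/modules-load.d/$mod.conf" >/dev/null
  if module_loaded "$mod" </proc/modules; then
    echo "skipping: $mod already loaded"   # the skipped hosts above
  else
    sudo modprobe "$mod"                   # the changed host above
  fi
}
```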
2025-05-19 21:47:34.197786 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.02s 2025-05-19 21:47:34.198335 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.77s 2025-05-19 21:47:34.199064 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.57s 2025-05-19 21:47:34.890857 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-19 21:47:36.320827 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-19 21:47:36.320935 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-19 21:47:36.320951 | orchestrator | + local max_attempts=60 2025-05-19 21:47:36.320964 | orchestrator | + local name=ceph-ansible 2025-05-19 21:47:36.320976 | orchestrator | + local attempt_num=1 2025-05-19 21:47:36.321055 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-19 21:47:36.358619 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 21:47:36.358709 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-19 21:47:36.358723 | orchestrator | + local max_attempts=60 2025-05-19 21:47:36.358737 | orchestrator | + local name=kolla-ansible 2025-05-19 21:47:36.358748 | orchestrator | + local attempt_num=1 2025-05-19 21:47:36.358824 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-19 21:47:36.390727 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-19 21:47:36.390812 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-19 21:47:36.390825 | orchestrator | + local max_attempts=60 2025-05-19 21:47:36.390892 | orchestrator | + local name=osism-ansible 2025-05-19 21:47:36.390906 | orchestrator | + local attempt_num=1 2025-05-19 21:47:36.391514 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-19 21:47:36.424367 | orchestrator | + [[ healthy == 
\h\e\a\l\t\h\y ]] 2025-05-19 21:47:36.424516 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-19 21:47:36.424540 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-19 21:47:36.579707 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-19 21:47:36.723822 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-19 21:47:36.869209 | orchestrator | ARA in osism-ansible already disabled. 2025-05-19 21:47:37.013391 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-19 21:47:37.013692 | orchestrator | + osism apply gather-facts 2025-05-19 21:47:38.713846 | orchestrator | 2025-05-19 21:47:38 | INFO  | Task 687bb890-ae49-435b-8d51-f12aa9b6fbbf (gather-facts) was prepared for execution. 2025-05-19 21:47:38.713951 | orchestrator | 2025-05-19 21:47:38 | INFO  | It takes a moment until task 687bb890-ae49-435b-8d51-f12aa9b6fbbf (gather-facts) has been started and output is visible here. 2025-05-19 21:47:42.654912 | orchestrator | 2025-05-19 21:47:42.656538 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 21:47:42.657455 | orchestrator | 2025-05-19 21:47:42.658545 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-19 21:47:42.660216 | orchestrator | Monday 19 May 2025 21:47:42 +0000 (0:00:00.192) 0:00:00.192 ************ 2025-05-19 21:47:47.639984 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:47:47.640100 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:47:47.640394 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:47:47.641856 | orchestrator | ok: [testbed-manager] 2025-05-19 21:47:47.644246 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:47:47.644286 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:47:47.645042 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:47:47.645986 | orchestrator | 2025-05-19 21:47:47.647658 | orchestrator | PLAY [Gather facts for all hosts if using --limit] 
***************************** 2025-05-19 21:47:47.648104 | orchestrator | 2025-05-19 21:47:47.648919 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-19 21:47:47.649670 | orchestrator | Monday 19 May 2025 21:47:47 +0000 (0:00:04.986) 0:00:05.178 ************ 2025-05-19 21:47:47.797653 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:47:47.871474 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:47:47.944858 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:47:48.016477 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:47:48.089675 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:47:48.120853 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:47:48.121259 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:47:48.122080 | orchestrator | 2025-05-19 21:47:48.123928 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:47:48.124983 | orchestrator | 2025-05-19 21:47:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 21:47:48.125022 | orchestrator | 2025-05-19 21:47:48 | INFO  | Please wait and do not abort execution. 
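The `wait_for_container_healthy` calls traced above (for ceph-ansible, kolla-ansible, and osism-ansible) can be reconstructed from the `set -x` output. The locals and the `docker inspect` probe are taken directly from the trace; the retry/sleep details are assumed, and `docker` is resolved via PATH here instead of the traced `/usr/bin/docker`:

```shell
# Plausible reconstruction of wait_for_container_healthy from the
# `set -x` trace above; retry interval and failure message are assumed.
wait_for_container_healthy() {
  local max_attempts="$1"
  local name="$2"
  local attempt_num=1
  until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
    if [ "$attempt_num" -ge "$max_attempts" ]; then
      echo "container $name still unhealthy after $max_attempts checks" >&2
      return 1
    fi
    attempt_num=$((attempt_num + 1))
    sleep 1  # assumed interval between health checks
  done
}
```

In this run all three containers reported `healthy` on the first probe, so the loop body never executed.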
2025-05-19 21:47:48.125272 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:48.125876 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:48.126750 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:48.126855 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:48.127587 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:48.127895 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:48.128190 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 21:47:48.128626 | orchestrator | 2025-05-19 21:47:48.128976 | orchestrator | 2025-05-19 21:47:48.129593 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:47:48.129922 | orchestrator | Monday 19 May 2025 21:47:48 +0000 (0:00:00.483) 0:00:05.661 ************ 2025-05-19 21:47:48.130271 | orchestrator | =============================================================================== 2025-05-19 21:47:48.130623 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.99s 2025-05-19 21:47:48.130954 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2025-05-19 21:47:48.685325 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-19 21:47:48.698097 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-19 21:47:48.708219 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-19 21:47:48.727721 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-19 21:47:48.742825 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-19 21:47:48.762397 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-19 21:47:48.781139 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-19 21:47:48.799531 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-19 21:47:48.813514 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-19 21:47:48.833720 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-19 21:47:48.853592 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-19 21:47:48.871000 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-19 21:47:48.889375 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-05-19 21:47:48.908263 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-19 21:47:48.925137 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-19 21:47:48.939629 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-19 21:47:48.950971 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-19 21:47:48.961760 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-19 21:47:48.972371 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-19 21:47:48.982863 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-19 21:47:48.993310 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-19 21:47:49.095411 | orchestrator | ok: Runtime: 0:24:31.724010 2025-05-19 21:47:49.203144 | 2025-05-19 21:47:49.203296 | TASK [Deploy services] 2025-05-19 21:47:49.737742 | orchestrator | skipping: Conditional result was False 2025-05-19 21:47:49.758107 | 2025-05-19 21:47:49.758319 | TASK [Deploy in a nutshell] 2025-05-19 21:47:50.464632 | orchestrator | + set -e 2025-05-19 21:47:50.464784 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 21:47:50.464796 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 21:47:50.464805 | orchestrator | ++ INTERACTIVE=false 2025-05-19 21:47:50.464810 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 21:47:50.464815 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 21:47:50.464830 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 21:47:50.464853 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 21:47:50.464864 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 21:47:50.464870 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 21:47:50.464876 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 21:47:50.464880 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 21:47:50.464887 | orchestrator | ++ 
CONFIGURATION_VERSION=main
2025-05-19 21:47:50.464891 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-19 21:47:50.464899 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-19 21:47:50.464903 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-19 21:47:50.464910 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-19 21:47:50.464914 | orchestrator | ++ export ARA=false
2025-05-19 21:47:50.464918 | orchestrator | ++ ARA=false
2025-05-19 21:47:50.464921 | orchestrator | ++ export TEMPEST=false
2025-05-19 21:47:50.464926 | orchestrator | ++ TEMPEST=false
2025-05-19 21:47:50.464931 | orchestrator | ++ export IS_ZUUL=true
2025-05-19 21:47:50.464951 | orchestrator | ++ IS_ZUUL=true
2025-05-19 21:47:50.464959 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197
2025-05-19 21:47:50.464965 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197
2025-05-19 21:47:50.464971 | orchestrator | ++ export EXTERNAL_API=false
2025-05-19 21:47:50.464977 | orchestrator | ++ EXTERNAL_API=false
2025-05-19 21:47:50.464984 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-19 21:47:50.464990 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-19 21:47:50.464997 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-19 21:47:50.465007 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-19 21:47:50.465011 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-19 21:47:50.466084 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-19 21:47:50.466099 | orchestrator | + echo
2025-05-19 21:47:50.466117 | orchestrator |
2025-05-19 21:47:50.466123 | orchestrator | # PULL IMAGES
2025-05-19 21:47:50.466127 | orchestrator |
2025-05-19 21:47:50.466131 | orchestrator | + echo '# PULL IMAGES'
2025-05-19 21:47:50.466141 | orchestrator | + echo
2025-05-19 21:47:50.467533 | orchestrator | ++ semver latest 7.0.0
2025-05-19 21:47:50.525372 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-19 21:47:50.525458 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-19 21:47:50.525465 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-05-19 21:47:52.148455 | orchestrator | 2025-05-19 21:47:52 | INFO  | Trying to run play pull-images in environment custom
2025-05-19 21:47:52.206074 | orchestrator | 2025-05-19 21:47:52 | INFO  | Task 7afba3a4-c13a-46aa-8a54-77481e4ce298 (pull-images) was prepared for execution.
2025-05-19 21:47:52.206155 | orchestrator | 2025-05-19 21:47:52 | INFO  | It takes a moment until task 7afba3a4-c13a-46aa-8a54-77481e4ce298 (pull-images) has been started and output is visible here.
2025-05-19 21:47:56.084087 | orchestrator |
2025-05-19 21:47:56.085583 | orchestrator | PLAY [Pull images] *************************************************************
2025-05-19 21:47:56.087138 | orchestrator |
2025-05-19 21:47:56.088132 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-05-19 21:47:56.088774 | orchestrator | Monday 19 May 2025 21:47:56 +0000 (0:00:00.145) 0:00:00.145 ************
2025-05-19 21:49:01.187953 | orchestrator | changed: [testbed-manager]
2025-05-19 21:49:01.188116 | orchestrator |
2025-05-19 21:49:01.188137 | orchestrator | TASK [Pull other images] *******************************************************
2025-05-19 21:49:01.188151 | orchestrator | Monday 19 May 2025 21:49:01 +0000 (0:01:05.101) 0:01:05.247 ************
2025-05-19 21:49:51.958301 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-05-19 21:49:51.958463 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-05-19 21:49:51.958483 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-05-19 21:49:51.963013 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-05-19 21:49:51.963072 | orchestrator | changed: [testbed-manager] => (item=common)
2025-05-19 21:49:51.963091 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-05-19 21:49:51.963209 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-05-19 21:49:51.964035 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-05-19 21:49:51.964955 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-05-19 21:49:51.965816 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-05-19 21:49:51.966406 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-05-19 21:49:51.967120 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-05-19 21:49:51.968100 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-05-19 21:49:51.968753 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-05-19 21:49:51.970816 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-05-19 21:49:51.970846 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-05-19 21:49:51.971047 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-05-19 21:49:51.971528 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-05-19 21:49:51.972182 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-05-19 21:49:51.972745 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-05-19 21:49:51.973015 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-05-19 21:49:51.973622 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-05-19 21:49:51.974110 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-05-19 21:49:51.974504 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-05-19 21:49:51.975045 | orchestrator |
2025-05-19 21:49:51.975590 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:49:51.976337 | orchestrator | 2025-05-19 21:49:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 21:49:51.976388 | orchestrator | 2025-05-19 21:49:51 | INFO  | Please wait and do not abort execution.
2025-05-19 21:49:51.976764 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 21:49:51.977128 | orchestrator |
2025-05-19 21:49:51.977598 | orchestrator |
2025-05-19 21:49:51.978167 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:49:51.978597 | orchestrator | Monday 19 May 2025 21:49:51 +0000 (0:00:50.774) 0:01:56.022 ************
2025-05-19 21:49:51.979117 | orchestrator | ===============================================================================
2025-05-19 21:49:51.979545 | orchestrator | Pull keystone image ---------------------------------------------------- 65.10s
2025-05-19 21:49:51.979982 | orchestrator | Pull other images ------------------------------------------------------ 50.77s
2025-05-19 21:49:54.236188 | orchestrator | 2025-05-19 21:49:54 | INFO  | Trying to run play wipe-partitions in environment custom
2025-05-19 21:49:54.299258 | orchestrator | 2025-05-19 21:49:54 | INFO  | Task a872c3e1-35c3-4315-866c-e1f4e351cdbd (wipe-partitions) was prepared for execution.
2025-05-19 21:49:54.299437 | orchestrator | 2025-05-19 21:49:54 | INFO  | It takes a moment until task a872c3e1-35c3-4315-866c-e1f4e351cdbd (wipe-partitions) has been started and output is visible here.
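For readability, the version gate visible in the shell trace above (`semver latest 7.0.0` returning -1, followed by the `latest` fallback check before `osism apply ... pull-images`) can be sketched as follows. This is an illustrative reconstruction: the real script uses a `semver` helper, for which `version_ge` below (based on `sort -V`) is only a stand-in.

```shell
# Sketch of the gate seen in the trace: pull-images runs when MANAGER_VERSION
# is "latest" or compares >= 7.0.0. version_ge is an assumed stand-in for the
# testbed's semver helper, not its actual implementation.
MANAGER_VERSION=${MANAGER_VERSION:-latest}

version_ge() {
    # True when $1 >= $2 in natural version order.
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

if [ "$MANAGER_VERSION" = "latest" ] || version_ge "$MANAGER_VERSION" "7.0.0"; then
    echo "pull images: osism apply -r 2 -e custom pull-images"
fi
```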
2025-05-19 21:49:58.078077 | orchestrator |
2025-05-19 21:49:58.078197 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-05-19 21:49:58.078220 | orchestrator |
2025-05-19 21:49:58.078477 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-05-19 21:49:58.078832 | orchestrator | Monday 19 May 2025 21:49:58 +0000 (0:00:00.146) 0:00:00.146 ************
2025-05-19 21:49:58.635683 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:49:58.635782 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:49:58.636935 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:49:58.637799 | orchestrator |
2025-05-19 21:49:58.637821 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-05-19 21:49:58.638563 | orchestrator | Monday 19 May 2025 21:49:58 +0000 (0:00:00.553) 0:00:00.699 ************
2025-05-19 21:49:58.797264 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:49:58.893569 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:49:58.893670 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:49:58.893685 | orchestrator |
2025-05-19 21:49:58.893698 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-05-19 21:49:58.893710 | orchestrator | Monday 19 May 2025 21:49:58 +0000 (0:00:00.255) 0:00:00.955 ************
2025-05-19 21:49:59.525755 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:49:59.526507 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:49:59.526546 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:49:59.526564 | orchestrator |
2025-05-19 21:49:59.529599 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-05-19 21:49:59.529646 | orchestrator | Monday 19 May 2025 21:49:59 +0000 (0:00:00.638) 0:00:01.593 ************
2025-05-19 21:49:59.709925 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:49:59.830157 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:49:59.830665 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:49:59.831763 | orchestrator |
2025-05-19 21:49:59.832455 | orchestrator | TASK [Check device availability] ***********************************************
2025-05-19 21:49:59.835306 | orchestrator | Monday 19 May 2025 21:49:59 +0000 (0:00:00.305) 0:00:01.899 ************
2025-05-19 21:50:01.003538 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-19 21:50:01.003624 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-19 21:50:01.003638 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-19 21:50:01.003818 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-19 21:50:01.004163 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-19 21:50:01.004496 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-19 21:50:01.007940 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-19 21:50:01.007978 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-19 21:50:01.007990 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-19 21:50:01.008002 | orchestrator |
2025-05-19 21:50:01.008015 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-05-19 21:50:01.008027 | orchestrator | Monday 19 May 2025 21:50:00 +0000 (0:00:01.172) 0:00:03.071 ************
2025-05-19 21:50:02.276745 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-05-19 21:50:02.277972 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-05-19 21:50:02.278005 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-05-19 21:50:02.278111 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-05-19 21:50:02.278278 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-05-19 21:50:02.279782 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-05-19 21:50:02.279802 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-05-19 21:50:02.280108 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-05-19 21:50:02.284004 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-05-19 21:50:02.284207 | orchestrator |
2025-05-19 21:50:02.284424 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-05-19 21:50:02.284712 | orchestrator | Monday 19 May 2025 21:50:02 +0000 (0:00:01.272) 0:00:04.344 ************
2025-05-19 21:50:04.537453 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-19 21:50:04.538148 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-19 21:50:04.538214 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-19 21:50:04.538838 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-19 21:50:04.539263 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-19 21:50:04.540272 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-19 21:50:04.540312 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-19 21:50:04.541678 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-19 21:50:04.541710 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-19 21:50:04.541722 | orchestrator |
2025-05-19 21:50:04.541756 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-05-19 21:50:04.541770 | orchestrator | Monday 19 May 2025 21:50:04 +0000 (0:00:02.262) 0:00:06.607 ************
2025-05-19 21:50:05.084847 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:50:05.085737 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:50:05.087273 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:50:05.089088 | orchestrator |
2025-05-19 21:50:05.089117 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-05-19 21:50:05.091611 | orchestrator | Monday 19 May 2025 21:50:05 +0000 (0:00:00.545) 0:00:07.152 ************
2025-05-19 21:50:05.651536 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:50:05.651795 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:50:05.653273 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:50:05.653960 | orchestrator |
2025-05-19 21:50:05.655267 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:50:05.655918 | orchestrator | 2025-05-19 21:50:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 21:50:05.655992 | orchestrator | 2025-05-19 21:50:05 | INFO  | Please wait and do not abort execution.
2025-05-19 21:50:05.657166 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:50:05.658947 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:50:05.659883 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:50:05.660126 | orchestrator |
2025-05-19 21:50:05.660814 | orchestrator |
2025-05-19 21:50:05.661349 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:50:05.663046 | orchestrator | Monday 19 May 2025 21:50:05 +0000 (0:00:00.567) 0:00:07.719 ************
2025-05-19 21:50:05.663080 | orchestrator | ===============================================================================
2025-05-19 21:50:05.663122 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.26s
2025-05-19 21:50:05.663346 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.27s
2025-05-19 21:50:05.664007 | orchestrator | Check device availability ----------------------------------------------- 1.17s
2025-05-19 21:50:05.664408 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.64s
2025-05-19 21:50:05.664826 | orchestrator | Request device events from the kernel ----------------------------------- 0.57s
2025-05-19 21:50:05.665431 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s
2025-05-19 21:50:05.665697 | orchestrator | Reload udev rules ------------------------------------------------------- 0.55s
2025-05-19 21:50:05.666289 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.31s
2025-05-19 21:50:05.666556 | orchestrator | Remove all rook related logical devices --------------------------------- 0.26s
2025-05-19 21:50:07.623935 | orchestrator | 2025-05-19 21:50:07 | INFO  | Task b1232845-8122-40b0-b1e3-c179f3d3282d (facts) was prepared for execution.
2025-05-19 21:50:07.624039 | orchestrator | 2025-05-19 21:50:07 | INFO  | It takes a moment until task b1232845-8122-40b0-b1e3-c179f3d3282d (facts) has been started and output is visible here.
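The wipe-partitions play above boils down to a short per-device sequence: wipe filesystem signatures, zero the first 32M, then nudge udev. The sketch below condenses those tasks into plain shell; the `wipe_device` function, the `DRY_RUN` guard, and the hard-coded device list are illustrative assumptions, not the testbed's actual playbook code.

```shell
# Condensed sketch of the per-device steps of the "Wipe partitions" play,
# applied to /dev/sdb../dev/sdd on each storage node. wipe_device and the
# DRY_RUN guard are assumptions for illustration (DRY_RUN defaults to on so
# nothing destructive runs by accident).
wipe_device() {
    dev=$1
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would wipe $dev"
        return 0
    fi
    wipefs --all "$dev"                                   # drop fs/RAID/LVM signatures
    dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct # zero first 32M (labels, GPT)
}

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    wipe_device "$dev"
done

# The play's final two tasks make udev re-read the now-empty disks:
[ "${DRY_RUN:-1}" = "1" ] || { udevadm control --reload-rules; udevadm trigger; }
```

Zeroing the first 32M catches metadata that `wipefs` alone may miss (e.g. stale LVM or Ceph labels), which matters before handing the disks to a fresh Ceph deployment.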
2025-05-19 21:50:11.673314 | orchestrator |
2025-05-19 21:50:11.677905 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-19 21:50:11.677976 | orchestrator |
2025-05-19 21:50:11.678898 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-19 21:50:11.678933 | orchestrator | Monday 19 May 2025 21:50:11 +0000 (0:00:00.256) 0:00:00.256 ************
2025-05-19 21:50:12.767096 | orchestrator | ok: [testbed-manager]
2025-05-19 21:50:12.767771 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:50:12.769918 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:50:12.773106 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:50:12.773153 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:50:12.773165 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:50:12.773176 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:50:12.773889 | orchestrator |
2025-05-19 21:50:12.774477 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-19 21:50:12.774915 | orchestrator | Monday 19 May 2025 21:50:12 +0000 (0:00:01.092) 0:00:01.349 ************
2025-05-19 21:50:12.933582 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:50:13.013684 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:50:13.094850 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:50:13.173947 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:50:13.250882 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:13.975638 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:13.975968 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:13.977078 | orchestrator |
2025-05-19 21:50:13.977437 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-19 21:50:13.978337 | orchestrator |
2025-05-19 21:50:13.979012 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-19 21:50:13.983215 | orchestrator | Monday 19 May 2025 21:50:13 +0000 (0:00:01.211) 0:00:02.561 ************
2025-05-19 21:50:19.070313 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:50:19.071037 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:50:19.072632 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:50:19.075061 | orchestrator | ok: [testbed-manager]
2025-05-19 21:50:19.075145 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:50:19.076022 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:50:19.076245 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:50:19.077130 | orchestrator |
2025-05-19 21:50:19.078403 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-19 21:50:19.078700 | orchestrator |
2025-05-19 21:50:19.079971 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-19 21:50:19.080765 | orchestrator | Monday 19 May 2025 21:50:19 +0000 (0:00:05.095) 0:00:07.657 ************
2025-05-19 21:50:19.225241 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:50:19.303724 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:50:19.376579 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:50:19.452340 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:50:19.537306 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:19.573749 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:19.575842 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:19.576243 | orchestrator |
2025-05-19 21:50:19.577173 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:50:19.577225 | orchestrator | 2025-05-19 21:50:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 21:50:19.577530 | orchestrator | 2025-05-19 21:50:19 | INFO  | Please wait and do not abort execution.
2025-05-19 21:50:19.578073 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:50:19.579509 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:50:19.579535 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:50:19.579547 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:50:19.579558 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:50:19.579948 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:50:19.580218 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:50:19.580567 | orchestrator |
2025-05-19 21:50:19.580980 | orchestrator |
2025-05-19 21:50:19.581372 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:50:19.581808 | orchestrator | Monday 19 May 2025 21:50:19 +0000 (0:00:00.504) 0:00:08.161 ************
2025-05-19 21:50:19.583146 | orchestrator | ===============================================================================
2025-05-19 21:50:19.583174 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.10s
2025-05-19 21:50:19.583190 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s
2025-05-19 21:50:19.583271 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s
2025-05-19 21:50:19.583641 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2025-05-19 21:50:22.745650 | orchestrator | 2025-05-19 21:50:22 | INFO  | Task 0dc683cc-d915-489d-9aaa-46c07ccb747f (ceph-configure-lvm-volumes) was prepared for execution.
2025-05-19 21:50:22.745764 | orchestrator | 2025-05-19 21:50:22 | INFO  | It takes a moment until task 0dc683cc-d915-489d-9aaa-46c07ccb747f (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-05-19 21:50:28.423035 | orchestrator |
2025-05-19 21:50:28.423808 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-19 21:50:28.423854 | orchestrator |
2025-05-19 21:50:28.425000 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-19 21:50:28.426438 | orchestrator | Monday 19 May 2025 21:50:28 +0000 (0:00:00.296) 0:00:00.296 ************
2025-05-19 21:50:28.635751 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-19 21:50:28.636538 | orchestrator |
2025-05-19 21:50:28.638720 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-19 21:50:28.641235 | orchestrator | Monday 19 May 2025 21:50:28 +0000 (0:00:00.215) 0:00:00.512 ************
2025-05-19 21:50:28.893472 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:50:28.894884 | orchestrator |
2025-05-19 21:50:28.894916 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:28.895628 | orchestrator | Monday 19 May 2025 21:50:28 +0000 (0:00:00.259) 0:00:00.772 ************
2025-05-19 21:50:29.213455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-19 21:50:29.216141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-19 21:50:29.216188 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-19 21:50:29.216880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-19 21:50:29.218117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-19 21:50:29.220361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-19 21:50:29.221117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-19 21:50:29.221895 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-19 21:50:29.222497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-19 21:50:29.222974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-19 21:50:29.223298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-19 21:50:29.223716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-19 21:50:29.225251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-19 21:50:29.227476 | orchestrator |
2025-05-19 21:50:29.227633 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:29.227691 | orchestrator | Monday 19 May 2025 21:50:29 +0000 (0:00:00.315) 0:00:01.087 ************
2025-05-19 21:50:29.609644 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:29.615122 | orchestrator |
2025-05-19 21:50:29.615199 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:29.615214 | orchestrator | Monday 19 May 2025 21:50:29 +0000 (0:00:00.403) 0:00:01.491 ************
2025-05-19 21:50:29.802465 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:29.802554 | orchestrator |
2025-05-19 21:50:29.804136 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:29.804273 | orchestrator | Monday 19 May 2025 21:50:29 +0000 (0:00:00.189) 0:00:01.680 ************
2025-05-19 21:50:29.998227 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:30.000192 | orchestrator |
2025-05-19 21:50:30.001124 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:30.001393 | orchestrator | Monday 19 May 2025 21:50:29 +0000 (0:00:00.195) 0:00:01.875 ************
2025-05-19 21:50:30.182915 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:30.183304 | orchestrator |
2025-05-19 21:50:30.184420 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:30.184446 | orchestrator | Monday 19 May 2025 21:50:30 +0000 (0:00:00.186) 0:00:02.062 ************
2025-05-19 21:50:30.373411 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:30.373506 | orchestrator |
2025-05-19 21:50:30.373906 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:30.375976 | orchestrator | Monday 19 May 2025 21:50:30 +0000 (0:00:00.189) 0:00:02.252 ************
2025-05-19 21:50:30.576118 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:30.576297 | orchestrator |
2025-05-19 21:50:30.576934 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:30.577882 | orchestrator | Monday 19 May 2025 21:50:30 +0000 (0:00:00.204) 0:00:02.456 ************
2025-05-19 21:50:30.751423 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:30.751507 | orchestrator |
2025-05-19 21:50:30.751521 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:30.751536 | orchestrator | Monday 19 May 2025 21:50:30 +0000 (0:00:00.171) 0:00:02.627 ************
2025-05-19 21:50:30.926206 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:30.926287 | orchestrator |
2025-05-19 21:50:30.926637 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:30.928310 | orchestrator | Monday 19 May 2025 21:50:30 +0000 (0:00:00.178) 0:00:02.806 ************
2025-05-19 21:50:31.292059 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f)
2025-05-19 21:50:31.294147 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f)
2025-05-19 21:50:31.294484 | orchestrator |
2025-05-19 21:50:31.295045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:31.297199 | orchestrator | Monday 19 May 2025 21:50:31 +0000 (0:00:00.367) 0:00:03.173 ************
2025-05-19 21:50:31.647053 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_65b1a457-74f9-440b-9c0b-913fdfb04314)
2025-05-19 21:50:31.649197 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_65b1a457-74f9-440b-9c0b-913fdfb04314)
2025-05-19 21:50:31.652955 | orchestrator |
2025-05-19 21:50:31.652991 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:31.653004 | orchestrator | Monday 19 May 2025 21:50:31 +0000 (0:00:00.353) 0:00:03.527 ************
2025-05-19 21:50:32.142463 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cd626c85-4d79-4ec3-873e-c38f80c6408d)
2025-05-19 21:50:32.143022 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cd626c85-4d79-4ec3-873e-c38f80c6408d)
2025-05-19 21:50:32.144919 | orchestrator |
2025-05-19 21:50:32.145305 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:32.145917 | orchestrator | Monday 19 May 2025 21:50:32 +0000 (0:00:00.492) 0:00:04.019 ************
2025-05-19 21:50:32.639652 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5aea9423-7155-4edc-a2c1-cc12eb50d261)
2025-05-19 21:50:32.639800 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5aea9423-7155-4edc-a2c1-cc12eb50d261)
2025-05-19 21:50:32.639818 | orchestrator |
2025-05-19 21:50:32.639898 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:32.640129 | orchestrator | Monday 19 May 2025 21:50:32 +0000 (0:00:00.497) 0:00:04.517 ************
2025-05-19 21:50:33.330077 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-19 21:50:33.331522 | orchestrator |
2025-05-19 21:50:33.333096 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:33.333408 | orchestrator | Monday 19 May 2025 21:50:33 +0000 (0:00:00.689) 0:00:05.206 ************
2025-05-19 21:50:33.679461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-19 21:50:33.681496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-19 21:50:33.684850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-19 21:50:33.684905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-19 21:50:33.687099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-19 21:50:33.688176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-19 21:50:33.689284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-19 21:50:33.690941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-19 21:50:33.691338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-19 21:50:33.691863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-19 21:50:33.692160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-19 21:50:33.692558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-19 21:50:33.692893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-19 21:50:33.693236 | orchestrator |
2025-05-19 21:50:33.693573 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:33.693844 | orchestrator | Monday 19 May 2025 21:50:33 +0000 (0:00:00.351) 0:00:05.558 ************
2025-05-19 21:50:33.865542 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:33.869863 | orchestrator |
2025-05-19 21:50:33.869947 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:33.871531 | orchestrator | Monday 19 May 2025 21:50:33 +0000 (0:00:00.185) 0:00:05.744 ************
2025-05-19 21:50:34.064412 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:34.067872 | orchestrator |
2025-05-19 21:50:34.067922 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:34.067945 | orchestrator | Monday 19 May 2025 21:50:34 +0000 (0:00:00.200) 0:00:05.944 ************
2025-05-19 21:50:34.240680 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:34.242491 | orchestrator |
2025-05-19 21:50:34.243408 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:34.243470 | orchestrator | Monday 19 May 2025 21:50:34 +0000 (0:00:00.175) 0:00:06.120 ************
2025-05-19 21:50:34.413178 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:34.413391 | orchestrator |
2025-05-19 21:50:34.413408 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:34.415950 | orchestrator | Monday 19 May 2025 21:50:34 +0000 (0:00:00.168) 0:00:06.289 ************
2025-05-19 21:50:34.582889 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:34.588913 | orchestrator |
2025-05-19 21:50:34.589022 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:34.590696 | orchestrator | Monday 19 May 2025 21:50:34 +0000 (0:00:00.175) 0:00:06.464 ************
2025-05-19 21:50:34.768078 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:34.768257 | orchestrator |
2025-05-19 21:50:34.769991 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:34.770141 | orchestrator | Monday 19 May 2025 21:50:34 +0000 (0:00:00.184) 0:00:06.648 ************
2025-05-19 21:50:34.929313 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:34.929584 | orchestrator |
2025-05-19 21:50:34.930145 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:34.930519 | orchestrator | Monday 19 May 2025 21:50:34 +0000 (0:00:00.162) 0:00:06.810 ************
2025-05-19 21:50:35.105590 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:35.105744 | orchestrator |
2025-05-19 21:50:35.105875 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:35.108099 | orchestrator | Monday 19 May 2025 21:50:35 +0000 (0:00:00.171) 0:00:06.982 ************
2025-05-19 21:50:36.136850 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-19 21:50:36.138796 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-19 21:50:36.141549 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-19 21:50:36.142087 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-19 21:50:36.143078 | orchestrator |
2025-05-19 21:50:36.143457 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:36.143769 | orchestrator | Monday 19 May 2025 21:50:36 +0000 (0:00:01.032) 0:00:08.015 ************
2025-05-19 21:50:36.325652 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:36.325758 | orchestrator |
2025-05-19 21:50:36.326565 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:36.328774 | orchestrator | Monday 19 May 2025 21:50:36 +0000 (0:00:00.183) 0:00:08.198 ************
2025-05-19 21:50:36.525108 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:36.525475 | orchestrator |
2025-05-19 21:50:36.526988 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:36.529600 | orchestrator | Monday 19 May 2025 21:50:36 +0000 (0:00:00.205) 0:00:08.403 ************
2025-05-19 21:50:36.755422 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:36.755522 | orchestrator |
2025-05-19 21:50:36.756074 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:36.758636 | orchestrator | Monday 19 May 2025 21:50:36 +0000 (0:00:00.228) 0:00:08.632 ************
2025-05-19 21:50:36.978480 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:36.978632 | orchestrator |
2025-05-19 21:50:36.978857 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-19 21:50:36.979327 | orchestrator | Monday 19 May 2025 21:50:36 +0000 (0:00:00.219) 0:00:08.851 ************
2025-05-19 21:50:37.123201 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-19 21:50:37.123394 |
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-19 21:50:37.123978 | orchestrator | 2025-05-19 21:50:37.124178 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-19 21:50:37.124711 | orchestrator | Monday 19 May 2025 21:50:37 +0000 (0:00:00.150) 0:00:09.001 ************ 2025-05-19 21:50:37.232948 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:50:37.233035 | orchestrator | 2025-05-19 21:50:37.233107 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-19 21:50:37.234446 | orchestrator | Monday 19 May 2025 21:50:37 +0000 (0:00:00.112) 0:00:09.113 ************ 2025-05-19 21:50:37.355897 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:50:37.356468 | orchestrator | 2025-05-19 21:50:37.357520 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-19 21:50:37.358727 | orchestrator | Monday 19 May 2025 21:50:37 +0000 (0:00:00.122) 0:00:09.236 ************ 2025-05-19 21:50:37.476744 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:50:37.476852 | orchestrator | 2025-05-19 21:50:37.476950 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-19 21:50:37.477209 | orchestrator | Monday 19 May 2025 21:50:37 +0000 (0:00:00.119) 0:00:09.355 ************ 2025-05-19 21:50:37.588901 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:50:37.589052 | orchestrator | 2025-05-19 21:50:37.589424 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-19 21:50:37.589766 | orchestrator | Monday 19 May 2025 21:50:37 +0000 (0:00:00.113) 0:00:09.469 ************ 2025-05-19 21:50:37.747832 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '52cfe21f-2cf0-5660-8f5b-0412bede7d5f'}}) 2025-05-19 21:50:37.748051 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'}}) 2025-05-19 21:50:37.750586 | orchestrator | 2025-05-19 21:50:37.750677 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-19 21:50:37.750702 | orchestrator | Monday 19 May 2025 21:50:37 +0000 (0:00:00.159) 0:00:09.628 ************ 2025-05-19 21:50:37.904163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '52cfe21f-2cf0-5660-8f5b-0412bede7d5f'}})  2025-05-19 21:50:37.905060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'}})  2025-05-19 21:50:37.906185 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:50:37.909732 | orchestrator | 2025-05-19 21:50:37.909756 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-19 21:50:37.909770 | orchestrator | Monday 19 May 2025 21:50:37 +0000 (0:00:00.156) 0:00:09.784 ************ 2025-05-19 21:50:38.244822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '52cfe21f-2cf0-5660-8f5b-0412bede7d5f'}})  2025-05-19 21:50:38.246185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'}})  2025-05-19 21:50:38.247779 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:50:38.247802 | orchestrator | 2025-05-19 21:50:38.247812 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-19 21:50:38.248043 | orchestrator | Monday 19 May 2025 21:50:38 +0000 (0:00:00.342) 0:00:10.126 ************ 2025-05-19 21:50:38.386389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '52cfe21f-2cf0-5660-8f5b-0412bede7d5f'}})  2025-05-19 21:50:38.387807 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'}})  2025-05-19 21:50:38.388459 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:50:38.388843 | orchestrator | 2025-05-19 21:50:38.389184 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-19 21:50:38.389518 | orchestrator | Monday 19 May 2025 21:50:38 +0000 (0:00:00.141) 0:00:10.268 ************ 2025-05-19 21:50:38.515542 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:50:38.516917 | orchestrator | 2025-05-19 21:50:38.516957 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-19 21:50:38.519053 | orchestrator | Monday 19 May 2025 21:50:38 +0000 (0:00:00.127) 0:00:10.395 ************ 2025-05-19 21:50:38.634698 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:50:38.635712 | orchestrator | 2025-05-19 21:50:38.636979 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-19 21:50:38.637006 | orchestrator | Monday 19 May 2025 21:50:38 +0000 (0:00:00.120) 0:00:10.516 ************ 2025-05-19 21:50:38.767329 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:50:38.767526 | orchestrator | 2025-05-19 21:50:38.767546 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-19 21:50:38.767560 | orchestrator | Monday 19 May 2025 21:50:38 +0000 (0:00:00.128) 0:00:10.644 ************ 2025-05-19 21:50:38.903469 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:50:38.903678 | orchestrator | 2025-05-19 21:50:38.904131 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-19 21:50:38.905825 | orchestrator | Monday 19 May 2025 21:50:38 +0000 (0:00:00.139) 0:00:10.784 ************ 2025-05-19 21:50:39.011335 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:50:39.011714 | orchestrator | 2025-05-19 
21:50:39.012858 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-19 21:50:39.013297 | orchestrator | Monday 19 May 2025 21:50:39 +0000 (0:00:00.108) 0:00:10.892 ************
2025-05-19 21:50:39.147084 | orchestrator | ok: [testbed-node-3] => {
2025-05-19 21:50:39.147963 | orchestrator |     "ceph_osd_devices": {
2025-05-19 21:50:39.148052 | orchestrator |         "sdb": {
2025-05-19 21:50:39.148146 | orchestrator |             "osd_lvm_uuid": "52cfe21f-2cf0-5660-8f5b-0412bede7d5f"
2025-05-19 21:50:39.148428 | orchestrator |         },
2025-05-19 21:50:39.148753 | orchestrator |         "sdc": {
2025-05-19 21:50:39.149430 | orchestrator |             "osd_lvm_uuid": "8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9"
2025-05-19 21:50:39.151094 | orchestrator |         }
2025-05-19 21:50:39.152078 | orchestrator |     }
2025-05-19 21:50:39.152107 | orchestrator | }
2025-05-19 21:50:39.152119 | orchestrator |
2025-05-19 21:50:39.152131 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-19 21:50:39.152198 | orchestrator | Monday 19 May 2025 21:50:39 +0000 (0:00:00.134) 0:00:11.027 ************
2025-05-19 21:50:39.266176 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:39.266436 | orchestrator |
2025-05-19 21:50:39.266519 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-19 21:50:39.266898 | orchestrator | Monday 19 May 2025 21:50:39 +0000 (0:00:00.118) 0:00:11.145 ************
2025-05-19 21:50:39.373320 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:39.374421 | orchestrator |
2025-05-19 21:50:39.374453 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-19 21:50:39.374467 | orchestrator | Monday 19 May 2025 21:50:39 +0000 (0:00:00.106) 0:00:11.252 ************
2025-05-19 21:50:39.472753 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:50:39.472937 | orchestrator |
2025-05-19 21:50:39.474985 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-19 21:50:39.475174 | orchestrator | Monday 19 May 2025 21:50:39 +0000 (0:00:00.099) 0:00:11.352 ************
2025-05-19 21:50:39.649410 | orchestrator | changed: [testbed-node-3] => {
2025-05-19 21:50:39.649891 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-19 21:50:39.650271 | orchestrator |         "ceph_osd_devices": {
2025-05-19 21:50:39.650772 | orchestrator |             "sdb": {
2025-05-19 21:50:39.651121 | orchestrator |                 "osd_lvm_uuid": "52cfe21f-2cf0-5660-8f5b-0412bede7d5f"
2025-05-19 21:50:39.651504 | orchestrator |             },
2025-05-19 21:50:39.652033 | orchestrator |             "sdc": {
2025-05-19 21:50:39.652249 | orchestrator |                 "osd_lvm_uuid": "8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9"
2025-05-19 21:50:39.652695 | orchestrator |             }
2025-05-19 21:50:39.653021 | orchestrator |         },
2025-05-19 21:50:39.653309 | orchestrator |         "lvm_volumes": [
2025-05-19 21:50:39.653852 | orchestrator |             {
2025-05-19 21:50:39.654454 | orchestrator |                 "data": "osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f",
2025-05-19 21:50:39.654973 | orchestrator |                 "data_vg": "ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f"
2025-05-19 21:50:39.656997 | orchestrator |             },
2025-05-19 21:50:39.657240 | orchestrator |             {
2025-05-19 21:50:39.657297 | orchestrator |                 "data": "osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9",
2025-05-19 21:50:39.657632 | orchestrator |                 "data_vg": "ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9"
2025-05-19 21:50:39.657975 | orchestrator |             }
2025-05-19 21:50:39.658225 | orchestrator |         ]
2025-05-19 21:50:39.658539 | orchestrator |     }
2025-05-19 21:50:39.659741 | orchestrator | }
2025-05-19 21:50:39.659769 | orchestrator |
2025-05-19 21:50:39.659781 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-19 21:50:39.659793 | orchestrator | Monday 19 May 2025 21:50:39 +0000 (0:00:00.177) 0:00:11.529 ************
2025-05-19
21:50:41.387273 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 21:50:41.387425 | orchestrator | 2025-05-19 21:50:41.388148 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-19 21:50:41.389124 | orchestrator | 2025-05-19 21:50:41.390434 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 21:50:41.390764 | orchestrator | Monday 19 May 2025 21:50:41 +0000 (0:00:01.730) 0:00:13.260 ************ 2025-05-19 21:50:41.618803 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-19 21:50:41.619294 | orchestrator | 2025-05-19 21:50:41.619329 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-19 21:50:41.621142 | orchestrator | Monday 19 May 2025 21:50:41 +0000 (0:00:00.237) 0:00:13.497 ************ 2025-05-19 21:50:41.848908 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:50:41.848994 | orchestrator | 2025-05-19 21:50:41.849010 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:41.849023 | orchestrator | Monday 19 May 2025 21:50:41 +0000 (0:00:00.231) 0:00:13.729 ************ 2025-05-19 21:50:42.182270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-19 21:50:42.184846 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-19 21:50:42.184883 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-19 21:50:42.185329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-19 21:50:42.188314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-19 21:50:42.188761 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-19 21:50:42.189158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-19 21:50:42.189962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-19 21:50:42.190924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-19 21:50:42.191782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-19 21:50:42.192753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-19 21:50:42.193329 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-19 21:50:42.194068 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-19 21:50:42.194328 | orchestrator | 2025-05-19 21:50:42.196699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:42.196730 | orchestrator | Monday 19 May 2025 21:50:42 +0000 (0:00:00.333) 0:00:14.062 ************ 2025-05-19 21:50:42.358972 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:42.359061 | orchestrator | 2025-05-19 21:50:42.359155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:42.359411 | orchestrator | Monday 19 May 2025 21:50:42 +0000 (0:00:00.174) 0:00:14.237 ************ 2025-05-19 21:50:42.529818 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:42.529925 | orchestrator | 2025-05-19 21:50:42.530089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:42.531690 | orchestrator | Monday 19 May 2025 21:50:42 +0000 (0:00:00.171) 0:00:14.409 ************ 2025-05-19 21:50:42.705384 | orchestrator | skipping: 
[testbed-node-4] 2025-05-19 21:50:42.709420 | orchestrator | 2025-05-19 21:50:42.709457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:42.709471 | orchestrator | Monday 19 May 2025 21:50:42 +0000 (0:00:00.173) 0:00:14.583 ************ 2025-05-19 21:50:42.873152 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:42.877622 | orchestrator | 2025-05-19 21:50:42.878400 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:42.880775 | orchestrator | Monday 19 May 2025 21:50:42 +0000 (0:00:00.167) 0:00:14.751 ************ 2025-05-19 21:50:43.337462 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:43.344015 | orchestrator | 2025-05-19 21:50:43.344048 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:43.344063 | orchestrator | Monday 19 May 2025 21:50:43 +0000 (0:00:00.464) 0:00:15.215 ************ 2025-05-19 21:50:43.511247 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:43.513044 | orchestrator | 2025-05-19 21:50:43.519402 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:43.520052 | orchestrator | Monday 19 May 2025 21:50:43 +0000 (0:00:00.176) 0:00:15.392 ************ 2025-05-19 21:50:43.697478 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:43.697566 | orchestrator | 2025-05-19 21:50:43.697704 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:43.698167 | orchestrator | Monday 19 May 2025 21:50:43 +0000 (0:00:00.181) 0:00:15.573 ************ 2025-05-19 21:50:43.873733 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:43.873814 | orchestrator | 2025-05-19 21:50:43.873827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:43.873839 | 
orchestrator | Monday 19 May 2025 21:50:43 +0000 (0:00:00.178) 0:00:15.752 ************ 2025-05-19 21:50:44.260300 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177) 2025-05-19 21:50:44.261697 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177) 2025-05-19 21:50:44.263144 | orchestrator | 2025-05-19 21:50:44.264071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:44.264804 | orchestrator | Monday 19 May 2025 21:50:44 +0000 (0:00:00.386) 0:00:16.139 ************ 2025-05-19 21:50:44.618656 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_53ed34a9-290d-4031-aa3e-f95b5c6d33b8) 2025-05-19 21:50:44.625653 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_53ed34a9-290d-4031-aa3e-f95b5c6d33b8) 2025-05-19 21:50:44.629126 | orchestrator | 2025-05-19 21:50:44.630929 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:44.631615 | orchestrator | Monday 19 May 2025 21:50:44 +0000 (0:00:00.358) 0:00:16.498 ************ 2025-05-19 21:50:44.985668 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_934db128-59d0-4992-8eb9-92fedfad2305) 2025-05-19 21:50:44.990121 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_934db128-59d0-4992-8eb9-92fedfad2305) 2025-05-19 21:50:44.990721 | orchestrator | 2025-05-19 21:50:44.991051 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:44.991578 | orchestrator | Monday 19 May 2025 21:50:44 +0000 (0:00:00.367) 0:00:16.865 ************ 2025-05-19 21:50:45.363468 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3) 2025-05-19 21:50:45.363679 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3) 2025-05-19 21:50:45.364681 | orchestrator | 2025-05-19 21:50:45.365827 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:50:45.367081 | orchestrator | Monday 19 May 2025 21:50:45 +0000 (0:00:00.378) 0:00:17.243 ************ 2025-05-19 21:50:45.658676 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 21:50:45.659825 | orchestrator | 2025-05-19 21:50:45.662925 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:45.663989 | orchestrator | Monday 19 May 2025 21:50:45 +0000 (0:00:00.296) 0:00:17.540 ************ 2025-05-19 21:50:45.998541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-19 21:50:45.998698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-19 21:50:46.004025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-19 21:50:46.004244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-19 21:50:46.004727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-19 21:50:46.005143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-19 21:50:46.005579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-19 21:50:46.006005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-19 21:50:46.006398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-19 21:50:46.006933 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-19 21:50:46.007261 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-19 21:50:46.007905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-19 21:50:46.008433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-19 21:50:46.008836 | orchestrator | 2025-05-19 21:50:46.009057 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:46.009570 | orchestrator | Monday 19 May 2025 21:50:45 +0000 (0:00:00.338) 0:00:17.879 ************ 2025-05-19 21:50:46.179170 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:46.179253 | orchestrator | 2025-05-19 21:50:46.182647 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:46.182726 | orchestrator | Monday 19 May 2025 21:50:46 +0000 (0:00:00.178) 0:00:18.057 ************ 2025-05-19 21:50:46.647692 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:46.648843 | orchestrator | 2025-05-19 21:50:46.648874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:46.649605 | orchestrator | Monday 19 May 2025 21:50:46 +0000 (0:00:00.468) 0:00:18.525 ************ 2025-05-19 21:50:46.813826 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:46.815325 | orchestrator | 2025-05-19 21:50:46.817141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:46.818460 | orchestrator | Monday 19 May 2025 21:50:46 +0000 (0:00:00.166) 0:00:18.692 ************ 2025-05-19 21:50:46.985457 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:46.985646 | orchestrator | 2025-05-19 21:50:46.987544 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-19 21:50:46.987573 | orchestrator | Monday 19 May 2025 21:50:46 +0000 (0:00:00.171) 0:00:18.864 ************ 2025-05-19 21:50:47.161474 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:47.162154 | orchestrator | 2025-05-19 21:50:47.163794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:47.165074 | orchestrator | Monday 19 May 2025 21:50:47 +0000 (0:00:00.174) 0:00:19.038 ************ 2025-05-19 21:50:47.349553 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:47.349637 | orchestrator | 2025-05-19 21:50:47.350603 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:47.350634 | orchestrator | Monday 19 May 2025 21:50:47 +0000 (0:00:00.188) 0:00:19.227 ************ 2025-05-19 21:50:47.523844 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:47.526466 | orchestrator | 2025-05-19 21:50:47.526575 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:47.526845 | orchestrator | Monday 19 May 2025 21:50:47 +0000 (0:00:00.178) 0:00:19.405 ************ 2025-05-19 21:50:47.689527 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:47.689764 | orchestrator | 2025-05-19 21:50:47.690112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:47.690256 | orchestrator | Monday 19 May 2025 21:50:47 +0000 (0:00:00.165) 0:00:19.571 ************ 2025-05-19 21:50:48.250085 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-19 21:50:48.250515 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-19 21:50:48.250972 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-19 21:50:48.251233 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-19 21:50:48.251868 | orchestrator | 2025-05-19 
21:50:48.252185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:48.252656 | orchestrator | Monday 19 May 2025 21:50:48 +0000 (0:00:00.557) 0:00:20.128 ************ 2025-05-19 21:50:48.441452 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:48.441532 | orchestrator | 2025-05-19 21:50:48.441783 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:48.442459 | orchestrator | Monday 19 May 2025 21:50:48 +0000 (0:00:00.193) 0:00:20.322 ************ 2025-05-19 21:50:48.613785 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:48.616025 | orchestrator | 2025-05-19 21:50:48.616108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:48.617182 | orchestrator | Monday 19 May 2025 21:50:48 +0000 (0:00:00.170) 0:00:20.493 ************ 2025-05-19 21:50:48.782736 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:48.783942 | orchestrator | 2025-05-19 21:50:48.785471 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:50:48.786388 | orchestrator | Monday 19 May 2025 21:50:48 +0000 (0:00:00.169) 0:00:20.662 ************ 2025-05-19 21:50:48.963760 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:48.963937 | orchestrator | 2025-05-19 21:50:48.964487 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-19 21:50:48.964757 | orchestrator | Monday 19 May 2025 21:50:48 +0000 (0:00:00.180) 0:00:20.842 ************ 2025-05-19 21:50:49.222375 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-19 21:50:49.226817 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-19 21:50:49.226850 | orchestrator | 2025-05-19 21:50:49.226863 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-05-19 21:50:49.227250 | orchestrator | Monday 19 May 2025 21:50:49 +0000 (0:00:00.259) 0:00:21.101 ************ 2025-05-19 21:50:49.350125 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:49.350966 | orchestrator | 2025-05-19 21:50:49.351200 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-19 21:50:49.351470 | orchestrator | Monday 19 May 2025 21:50:49 +0000 (0:00:00.126) 0:00:21.228 ************ 2025-05-19 21:50:49.467109 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:49.469366 | orchestrator | 2025-05-19 21:50:49.471836 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-19 21:50:49.472392 | orchestrator | Monday 19 May 2025 21:50:49 +0000 (0:00:00.117) 0:00:21.346 ************ 2025-05-19 21:50:49.589486 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:50:49.594449 | orchestrator | 2025-05-19 21:50:49.595909 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-19 21:50:49.598396 | orchestrator | Monday 19 May 2025 21:50:49 +0000 (0:00:00.122) 0:00:21.469 ************ 2025-05-19 21:50:49.738288 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:50:49.742263 | orchestrator | 2025-05-19 21:50:49.747900 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-19 21:50:49.747933 | orchestrator | Monday 19 May 2025 21:50:49 +0000 (0:00:00.137) 0:00:21.607 ************ 2025-05-19 21:50:49.882975 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2161015-9b2d-55ef-85cd-b20f941db83a'}}) 2025-05-19 21:50:49.883060 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '73ec3cc1-218e-51bb-a362-2e871742ea52'}}) 2025-05-19 21:50:49.884068 | orchestrator | 2025-05-19 21:50:49.884824 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] *****************************
2025-05-19 21:50:49.885901 | orchestrator | Monday 19 May 2025 21:50:49 +0000 (0:00:00.152) 0:00:21.759 ************
2025-05-19 21:50:50.013281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2161015-9b2d-55ef-85cd-b20f941db83a'}})
2025-05-19 21:50:50.013414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '73ec3cc1-218e-51bb-a362-2e871742ea52'}})
2025-05-19 21:50:50.013429 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:50.013521 | orchestrator |
2025-05-19 21:50:50.014366 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-19 21:50:50.014515 | orchestrator | Monday 19 May 2025 21:50:50 +0000 (0:00:00.132) 0:00:21.892 ************
2025-05-19 21:50:50.160274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2161015-9b2d-55ef-85cd-b20f941db83a'}})
2025-05-19 21:50:50.160764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '73ec3cc1-218e-51bb-a362-2e871742ea52'}})
2025-05-19 21:50:50.161255 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:50.162192 | orchestrator |
2025-05-19 21:50:50.162471 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-19 21:50:50.163189 | orchestrator | Monday 19 May 2025 21:50:50 +0000 (0:00:00.139) 0:00:22.031 ************
2025-05-19 21:50:50.287779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2161015-9b2d-55ef-85cd-b20f941db83a'}})
2025-05-19 21:50:50.289259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '73ec3cc1-218e-51bb-a362-2e871742ea52'}})
2025-05-19 21:50:50.291240 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:50.291489 | orchestrator |
2025-05-19 21:50:50.292078 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-19 21:50:50.292252 | orchestrator | Monday 19 May 2025 21:50:50 +0000 (0:00:00.133) 0:00:22.165 ************
2025-05-19 21:50:50.405123 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:50:50.405780 | orchestrator |
2025-05-19 21:50:50.406680 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-19 21:50:50.410229 | orchestrator | Monday 19 May 2025 21:50:50 +0000 (0:00:00.119) 0:00:22.285 ************
2025-05-19 21:50:50.524671 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:50:50.524898 | orchestrator |
2025-05-19 21:50:50.525840 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-19 21:50:50.526283 | orchestrator | Monday 19 May 2025 21:50:50 +0000 (0:00:00.116) 0:00:22.401 ************
2025-05-19 21:50:50.644459 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:50.650178 | orchestrator |
2025-05-19 21:50:50.650484 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-19 21:50:50.653768 | orchestrator | Monday 19 May 2025 21:50:50 +0000 (0:00:00.120) 0:00:22.522 ************
2025-05-19 21:50:50.890483 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:50.892287 | orchestrator |
2025-05-19 21:50:50.892330 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-19 21:50:50.892445 | orchestrator | Monday 19 May 2025 21:50:50 +0000 (0:00:00.247) 0:00:22.769 ************
2025-05-19 21:50:51.019649 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:51.020153 | orchestrator |
2025-05-19 21:50:51.023202 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-19 21:50:51.023235 | orchestrator | Monday 19 May 2025 21:50:51 +0000 (0:00:00.131) 0:00:22.901 ************
2025-05-19 21:50:51.136062 | orchestrator | ok: [testbed-node-4] => {
2025-05-19 21:50:51.136540 | orchestrator |     "ceph_osd_devices": {
2025-05-19 21:50:51.136951 | orchestrator |         "sdb": {
2025-05-19 21:50:51.137707 | orchestrator |             "osd_lvm_uuid": "d2161015-9b2d-55ef-85cd-b20f941db83a"
2025-05-19 21:50:51.139057 | orchestrator |         },
2025-05-19 21:50:51.139629 | orchestrator |         "sdc": {
2025-05-19 21:50:51.140491 | orchestrator |             "osd_lvm_uuid": "73ec3cc1-218e-51bb-a362-2e871742ea52"
2025-05-19 21:50:51.142945 | orchestrator |         }
2025-05-19 21:50:51.143278 | orchestrator |     }
2025-05-19 21:50:51.143700 | orchestrator | }
2025-05-19 21:50:51.144116 | orchestrator |
2025-05-19 21:50:51.145954 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-19 21:50:51.146292 | orchestrator | Monday 19 May 2025 21:50:51 +0000 (0:00:00.115) 0:00:23.016 ************
2025-05-19 21:50:51.249138 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:51.249227 | orchestrator |
2025-05-19 21:50:51.250183 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-19 21:50:51.250460 | orchestrator | Monday 19 May 2025 21:50:51 +0000 (0:00:00.113) 0:00:23.130 ************
2025-05-19 21:50:51.360204 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:51.360315 | orchestrator |
2025-05-19 21:50:51.360542 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-19 21:50:51.361880 | orchestrator | Monday 19 May 2025 21:50:51 +0000 (0:00:00.110) 0:00:23.241 ************
2025-05-19 21:50:51.487006 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:50:51.487427 | orchestrator |
2025-05-19 21:50:51.489213 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-19 21:50:51.490368 | orchestrator | Monday 19 May 2025 21:50:51 +0000 (0:00:00.125) 0:00:23.366 ************
2025-05-19 21:50:51.704893 | orchestrator | changed: [testbed-node-4] => {
2025-05-19 21:50:51.706993 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-19 21:50:51.708037 | orchestrator |         "ceph_osd_devices": {
2025-05-19 21:50:51.709404 | orchestrator |             "sdb": {
2025-05-19 21:50:51.711571 | orchestrator |                 "osd_lvm_uuid": "d2161015-9b2d-55ef-85cd-b20f941db83a"
2025-05-19 21:50:51.714495 | orchestrator |             },
2025-05-19 21:50:51.717098 | orchestrator |             "sdc": {
2025-05-19 21:50:51.717201 | orchestrator |                 "osd_lvm_uuid": "73ec3cc1-218e-51bb-a362-2e871742ea52"
2025-05-19 21:50:51.717216 | orchestrator |             }
2025-05-19 21:50:51.718781 | orchestrator |         },
2025-05-19 21:50:51.721289 | orchestrator |         "lvm_volumes": [
2025-05-19 21:50:51.721718 | orchestrator |             {
2025-05-19 21:50:51.721928 | orchestrator |                 "data": "osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a",
2025-05-19 21:50:51.724228 | orchestrator |                 "data_vg": "ceph-d2161015-9b2d-55ef-85cd-b20f941db83a"
2025-05-19 21:50:51.724373 | orchestrator |             },
2025-05-19 21:50:51.724716 | orchestrator |             {
2025-05-19 21:50:51.725206 | orchestrator |                 "data": "osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52",
2025-05-19 21:50:51.725302 | orchestrator |                 "data_vg": "ceph-73ec3cc1-218e-51bb-a362-2e871742ea52"
2025-05-19 21:50:51.725690 | orchestrator |             }
2025-05-19 21:50:51.726090 | orchestrator |         ]
2025-05-19 21:50:51.726384 | orchestrator |     }
2025-05-19 21:50:51.728045 | orchestrator | }
2025-05-19 21:50:51.728129 | orchestrator |
2025-05-19 21:50:51.728423 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-19 21:50:51.728613 | orchestrator | Monday 19 May 2025 21:50:51 +0000 (0:00:00.214) 0:00:23.580 ************
2025-05-19 21:50:52.751906 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-05-19 21:50:52.753934 | orchestrator |
2025-05-19 21:50:52.756731 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-19 21:50:52.757293 | orchestrator |
2025-05-19 21:50:52.760134 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-19 21:50:52.764863 | orchestrator | Monday 19 May 2025 21:50:52 +0000 (0:00:01.049) 0:00:24.630 ************
2025-05-19 21:50:53.335653 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-19 21:50:53.335799 | orchestrator |
2025-05-19 21:50:53.335882 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-19 21:50:53.339324 | orchestrator | Monday 19 May 2025 21:50:53 +0000 (0:00:00.583) 0:00:25.213 ************
2025-05-19 21:50:54.131649 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:50:54.132574 | orchestrator |
2025-05-19 21:50:54.133692 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:54.139920 | orchestrator | Monday 19 May 2025 21:50:54 +0000 (0:00:00.797) 0:00:26.010 ************
2025-05-19 21:50:54.562987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-19 21:50:54.564543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-19 21:50:54.568098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-19 21:50:54.568131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-19 21:50:54.568741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-19 21:50:54.572331 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-19 21:50:54.573199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-19 21:50:54.574378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-19 21:50:54.575876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-19 21:50:54.577061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-19 21:50:54.578124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-19 21:50:54.579162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-19 21:50:54.580316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-19 21:50:54.581318 | orchestrator |
2025-05-19 21:50:54.582060 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:54.582808 | orchestrator | Monday 19 May 2025 21:50:54 +0000 (0:00:00.429) 0:00:26.440 ************
2025-05-19 21:50:54.768123 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:54.770255 | orchestrator |
2025-05-19 21:50:54.771636 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:54.771733 | orchestrator | Monday 19 May 2025 21:50:54 +0000 (0:00:00.206) 0:00:26.646 ************
2025-05-19 21:50:54.977607 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:54.979081 | orchestrator |
2025-05-19 21:50:54.979756 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:54.983409 | orchestrator | Monday 19 May 2025 21:50:54 +0000 (0:00:00.208) 0:00:26.855 ************
2025-05-19 21:50:55.213876 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:55.213984 | orchestrator |
2025-05-19 21:50:55.214001 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:55.214201 | orchestrator | Monday 19 May 2025 21:50:55 +0000 (0:00:00.229) 0:00:27.084 ************
2025-05-19 21:50:55.396225 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:55.397955 | orchestrator |
2025-05-19 21:50:55.402122 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:55.402167 | orchestrator | Monday 19 May 2025 21:50:55 +0000 (0:00:00.191) 0:00:27.275 ************
2025-05-19 21:50:55.613481 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:55.615294 | orchestrator |
2025-05-19 21:50:55.617210 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:55.618828 | orchestrator | Monday 19 May 2025 21:50:55 +0000 (0:00:00.215) 0:00:27.492 ************
2025-05-19 21:50:55.831694 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:55.832232 | orchestrator |
2025-05-19 21:50:55.834110 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:55.836634 | orchestrator | Monday 19 May 2025 21:50:55 +0000 (0:00:00.215) 0:00:27.708 ************
2025-05-19 21:50:56.006324 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:56.007404 | orchestrator |
2025-05-19 21:50:56.007493 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:56.008594 | orchestrator | Monday 19 May 2025 21:50:55 +0000 (0:00:00.174) 0:00:27.883 ************
2025-05-19 21:50:56.190127 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:56.192588 | orchestrator |
2025-05-19 21:50:56.194126 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:56.194841 | orchestrator | Monday 19 May 2025 21:50:56 +0000 (0:00:00.184) 0:00:28.067 ************
2025-05-19 21:50:56.789893 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397)
2025-05-19 21:50:56.792659 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397)
2025-05-19 21:50:56.793499 | orchestrator |
2025-05-19 21:50:56.796332 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:56.797250 | orchestrator | Monday 19 May 2025 21:50:56 +0000 (0:00:00.598) 0:00:28.666 ************
2025-05-19 21:50:57.667665 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70)
2025-05-19 21:50:57.667775 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70)
2025-05-19 21:50:57.667790 | orchestrator |
2025-05-19 21:50:57.667803 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:57.670081 | orchestrator | Monday 19 May 2025 21:50:57 +0000 (0:00:00.876) 0:00:29.543 ************
2025-05-19 21:50:58.216613 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6)
2025-05-19 21:50:58.223662 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6)
2025-05-19 21:50:58.226296 | orchestrator |
2025-05-19 21:50:58.231403 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:58.232209 | orchestrator | Monday 19 May 2025 21:50:58 +0000 (0:00:00.551) 0:00:30.094 ************
2025-05-19 21:50:58.782101 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba)
2025-05-19 21:50:58.783383 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba)
2025-05-19 21:50:58.784657 | orchestrator |
2025-05-19 21:50:58.786265 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:50:58.787598 | orchestrator | Monday 19 May 2025 21:50:58 +0000 (0:00:00.566) 0:00:30.661 ************
2025-05-19 21:50:59.141975 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-19 21:50:59.199976 | orchestrator |
2025-05-19 21:50:59.200062 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:59.200076 | orchestrator | Monday 19 May 2025 21:50:59 +0000 (0:00:00.360) 0:00:31.021 ************
2025-05-19 21:50:59.553168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-05-19 21:50:59.553927 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-05-19 21:50:59.557240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-05-19 21:50:59.557272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-05-19 21:50:59.557284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-05-19 21:50:59.557873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-05-19 21:50:59.559032 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-05-19 21:50:59.560018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-05-19 21:50:59.562801 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-05-19 21:50:59.563029 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-05-19 21:50:59.564634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
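The "Add known links" tasks above collect the stable `/dev/disk/by-id` names (e.g. `scsi-0QEMU_QEMU_HARDDISK_…`) that resolve to each kernel device, so OSDs can be addressed by identifiers that survive reboots. A minimal sketch of that lookup, assuming the usual by-id symlink layout; the helper name is ours, and the mapping below simulates what `os.path.realpath` would return on the node (link names taken from the log):

```python
def links_for_device(by_id: dict[str, str], device: str) -> list[str]:
    """Return all by-id link names that resolve to the given kernel device."""
    return sorted(name for name, target in by_id.items() if target == device)

# Simulated /dev/disk/by-id contents, as realpath() might report them.
by_id = {
    "scsi-0QEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70": "/dev/sdb",
    "scsi-SQEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70": "/dev/sdb",
    "ata-QEMU_DVD-ROM_QM00001": "/dev/sr0",
}

print(links_for_device(by_id, "/dev/sdb"))
```

On a real node the dictionary would be built by walking `/dev/disk/by-id` and resolving each symlink.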
2025-05-19 21:50:59.564664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-05-19 21:50:59.565056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-05-19 21:50:59.565837 | orchestrator |
2025-05-19 21:50:59.566924 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:59.567294 | orchestrator | Monday 19 May 2025 21:50:59 +0000 (0:00:00.408) 0:00:31.430 ************
2025-05-19 21:50:59.747519 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:59.747630 | orchestrator |
2025-05-19 21:50:59.747645 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:59.747658 | orchestrator | Monday 19 May 2025 21:50:59 +0000 (0:00:00.193) 0:00:31.623 ************
2025-05-19 21:50:59.952596 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:50:59.952951 | orchestrator |
2025-05-19 21:50:59.953377 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:50:59.954247 | orchestrator | Monday 19 May 2025 21:50:59 +0000 (0:00:00.207) 0:00:31.830 ************
2025-05-19 21:51:00.153418 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:00.154119 | orchestrator |
2025-05-19 21:51:00.154992 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:51:00.155977 | orchestrator | Monday 19 May 2025 21:51:00 +0000 (0:00:00.202) 0:00:32.033 ************
2025-05-19 21:51:00.348255 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:00.349140 | orchestrator |
2025-05-19 21:51:00.351559 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:51:00.352075 | orchestrator | Monday 19 May 2025 21:51:00 +0000 (0:00:00.192) 0:00:32.226 ************
2025-05-19 21:51:00.542594 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:00.543592 | orchestrator |
2025-05-19 21:51:00.544218 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:51:00.544974 | orchestrator | Monday 19 May 2025 21:51:00 +0000 (0:00:00.195) 0:00:32.421 ************
2025-05-19 21:51:01.137539 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:01.139174 | orchestrator |
2025-05-19 21:51:01.140690 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:51:01.142059 | orchestrator | Monday 19 May 2025 21:51:01 +0000 (0:00:00.595) 0:00:33.016 ************
2025-05-19 21:51:01.336649 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:01.337606 | orchestrator |
2025-05-19 21:51:01.340001 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:51:01.340827 | orchestrator | Monday 19 May 2025 21:51:01 +0000 (0:00:00.197) 0:00:33.214 ************
2025-05-19 21:51:01.564569 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:01.565510 | orchestrator |
2025-05-19 21:51:01.566804 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:51:01.568053 | orchestrator | Monday 19 May 2025 21:51:01 +0000 (0:00:00.228) 0:00:33.443 ************
2025-05-19 21:51:02.206179 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-05-19 21:51:02.206446 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-05-19 21:51:02.207091 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-05-19 21:51:02.208180 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-05-19 21:51:02.209214 | orchestrator |
2025-05-19 21:51:02.210883 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:51:02.211932 | orchestrator | Monday 19 May 2025 21:51:02 +0000 (0:00:00.642) 0:00:34.085 ************
2025-05-19 21:51:02.391633 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:02.392149 | orchestrator |
2025-05-19 21:51:02.393838 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:51:02.393865 | orchestrator | Monday 19 May 2025 21:51:02 +0000 (0:00:00.184) 0:00:34.269 ************
2025-05-19 21:51:02.588208 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:02.588305 | orchestrator |
2025-05-19 21:51:02.588812 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:51:02.589173 | orchestrator | Monday 19 May 2025 21:51:02 +0000 (0:00:00.196) 0:00:34.465 ************
2025-05-19 21:51:02.788771 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:02.789238 | orchestrator |
2025-05-19 21:51:02.790057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:51:02.790768 | orchestrator | Monday 19 May 2025 21:51:02 +0000 (0:00:00.202) 0:00:34.668 ************
2025-05-19 21:51:02.981824 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:02.982497 | orchestrator |
2025-05-19 21:51:02.982531 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-19 21:51:02.982931 | orchestrator | Monday 19 May 2025 21:51:02 +0000 (0:00:00.192) 0:00:34.860 ************
2025-05-19 21:51:03.155535 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-05-19 21:51:03.156188 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-05-19 21:51:03.156950 | orchestrator |
2025-05-19 21:51:03.159389 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-19 21:51:03.159628 | orchestrator | Monday 19 May 2025 21:51:03 +0000 (0:00:00.173) 0:00:35.034 ************
2025-05-19 21:51:03.286280 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:03.286560 | orchestrator |
2025-05-19 21:51:03.287361 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-19 21:51:03.287941 | orchestrator | Monday 19 May 2025 21:51:03 +0000 (0:00:00.131) 0:00:35.166 ************
2025-05-19 21:51:03.416690 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:03.416910 | orchestrator |
2025-05-19 21:51:03.418157 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-19 21:51:03.419831 | orchestrator | Monday 19 May 2025 21:51:03 +0000 (0:00:00.129) 0:00:35.296 ************
2025-05-19 21:51:03.546797 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:03.547326 | orchestrator |
2025-05-19 21:51:03.548527 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-19 21:51:03.549262 | orchestrator | Monday 19 May 2025 21:51:03 +0000 (0:00:00.128) 0:00:35.424 ************
2025-05-19 21:51:03.856849 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:51:03.857392 | orchestrator |
2025-05-19 21:51:03.857764 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-19 21:51:03.858958 | orchestrator | Monday 19 May 2025 21:51:03 +0000 (0:00:00.311) 0:00:35.736 ************
2025-05-19 21:51:04.025320 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd6c00661-cf2a-5067-a507-d2ca4df6447b'}})
2025-05-19 21:51:04.026480 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'}})
2025-05-19 21:51:04.026982 | orchestrator |
2025-05-19 21:51:04.028687 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-19 21:51:04.028715 | orchestrator | Monday 19 May 2025 21:51:04 +0000 (0:00:00.168) 0:00:35.904 ************
2025-05-19 21:51:04.177073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd6c00661-cf2a-5067-a507-d2ca4df6447b'}})
2025-05-19 21:51:04.178260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'}})
2025-05-19 21:51:04.179462 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:04.180451 | orchestrator |
2025-05-19 21:51:04.181136 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-19 21:51:04.182053 | orchestrator | Monday 19 May 2025 21:51:04 +0000 (0:00:00.152) 0:00:36.056 ************
2025-05-19 21:51:04.330911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd6c00661-cf2a-5067-a507-d2ca4df6447b'}})
2025-05-19 21:51:04.331967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'}})
2025-05-19 21:51:04.332244 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:04.333179 | orchestrator |
2025-05-19 21:51:04.334595 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-19 21:51:04.334635 | orchestrator | Monday 19 May 2025 21:51:04 +0000 (0:00:00.153) 0:00:36.210 ************
2025-05-19 21:51:04.482897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd6c00661-cf2a-5067-a507-d2ca4df6447b'}})
2025-05-19 21:51:04.483010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'}})
2025-05-19 21:51:04.483606 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:04.484240 | orchestrator |
2025-05-19 21:51:04.484879 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-19 21:51:04.485316 | orchestrator | Monday 19 May 2025 21:51:04 +0000 (0:00:00.150) 0:00:36.360 ************
2025-05-19 21:51:04.638357 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:51:04.638457 | orchestrator |
2025-05-19 21:51:04.638471 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-19 21:51:04.638484 | orchestrator | Monday 19 May 2025 21:51:04 +0000 (0:00:00.152) 0:00:36.513 ************
2025-05-19 21:51:04.783538 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:51:04.783927 | orchestrator |
2025-05-19 21:51:04.785138 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-19 21:51:04.786097 | orchestrator | Monday 19 May 2025 21:51:04 +0000 (0:00:00.149) 0:00:36.662 ************
2025-05-19 21:51:04.908176 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:04.908849 | orchestrator |
2025-05-19 21:51:04.909174 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-19 21:51:04.910228 | orchestrator | Monday 19 May 2025 21:51:04 +0000 (0:00:00.125) 0:00:36.788 ************
2025-05-19 21:51:05.052788 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:05.052891 | orchestrator |
2025-05-19 21:51:05.053986 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-19 21:51:05.054977 | orchestrator | Monday 19 May 2025 21:51:05 +0000 (0:00:00.142) 0:00:36.931 ************
2025-05-19 21:51:05.182619 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:05.185110 | orchestrator |
2025-05-19 21:51:05.185143 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-19 21:51:05.186186 | orchestrator | Monday 19 May 2025 21:51:05 +0000 (0:00:00.131) 0:00:37.062 ************
2025-05-19 21:51:05.319081 | orchestrator | ok: [testbed-node-5] => {
2025-05-19 21:51:05.320701 | orchestrator |     "ceph_osd_devices": {
2025-05-19 21:51:05.321737 | orchestrator |         "sdb": {
2025-05-19 21:51:05.323726 | orchestrator |             "osd_lvm_uuid": "d6c00661-cf2a-5067-a507-d2ca4df6447b"
2025-05-19 21:51:05.324276 | orchestrator |         },
2025-05-19 21:51:05.324428 | orchestrator |         "sdc": {
2025-05-19 21:51:05.325881 | orchestrator |             "osd_lvm_uuid": "cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8"
2025-05-19 21:51:05.327914 | orchestrator |         }
2025-05-19 21:51:05.327961 | orchestrator |     }
2025-05-19 21:51:05.328503 | orchestrator | }
2025-05-19 21:51:05.329284 | orchestrator |
2025-05-19 21:51:05.329943 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-19 21:51:05.330772 | orchestrator | Monday 19 May 2025 21:51:05 +0000 (0:00:00.132) 0:00:37.194 ************
2025-05-19 21:51:05.438546 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:05.438746 | orchestrator |
2025-05-19 21:51:05.438979 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-19 21:51:05.439441 | orchestrator | Monday 19 May 2025 21:51:05 +0000 (0:00:00.122) 0:00:37.317 ************
2025-05-19 21:51:05.768558 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:05.769359 | orchestrator |
2025-05-19 21:51:05.770420 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-19 21:51:05.771878 | orchestrator | Monday 19 May 2025 21:51:05 +0000 (0:00:00.330) 0:00:37.647 ************
2025-05-19 21:51:05.890957 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:51:05.892837 | orchestrator |
2025-05-19 21:51:05.893992 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-19 21:51:05.894457 | orchestrator | Monday 19 May 2025 21:51:05 +0000 (0:00:00.122) 0:00:37.770 ************
2025-05-19 21:51:06.091167 | orchestrator | changed: [testbed-node-5] => {
2025-05-19 21:51:06.091598 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-19 21:51:06.092509 | orchestrator |         "ceph_osd_devices": {
2025-05-19 21:51:06.093433 | orchestrator |             "sdb": {
2025-05-19 21:51:06.095974 | orchestrator |                 "osd_lvm_uuid": "d6c00661-cf2a-5067-a507-d2ca4df6447b"
2025-05-19 21:51:06.097097 | orchestrator |             },
2025-05-19 21:51:06.097688 | orchestrator |             "sdc": {
2025-05-19 21:51:06.098544 | orchestrator |                 "osd_lvm_uuid": "cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8"
2025-05-19 21:51:06.099213 | orchestrator |             }
2025-05-19 21:51:06.099679 | orchestrator |         },
2025-05-19 21:51:06.100271 | orchestrator |         "lvm_volumes": [
2025-05-19 21:51:06.100902 | orchestrator |             {
2025-05-19 21:51:06.101442 | orchestrator |                 "data": "osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b",
2025-05-19 21:51:06.102102 | orchestrator |                 "data_vg": "ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b"
2025-05-19 21:51:06.102517 | orchestrator |             },
2025-05-19 21:51:06.103058 | orchestrator |             {
2025-05-19 21:51:06.103851 | orchestrator |                 "data": "osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8",
2025-05-19 21:51:06.104204 | orchestrator |                 "data_vg": "ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8"
2025-05-19 21:51:06.105201 | orchestrator |             }
2025-05-19 21:51:06.105462 | orchestrator |         ]
2025-05-19 21:51:06.105952 | orchestrator |     }
2025-05-19 21:51:06.106408 | orchestrator | }
2025-05-19 21:51:06.107271 | orchestrator |
2025-05-19 21:51:06.107647 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-19 21:51:06.107844 | orchestrator | Monday 19 May 2025 21:51:06 +0000 (0:00:00.200) 0:00:37.970 ************
2025-05-19 21:51:07.053711 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-19 21:51:07.053824 | orchestrator |
2025-05-19 21:51:07.060392 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:51:07.060500 | orchestrator | 2025-05-19 21:51:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 21:51:07.060521 | orchestrator | 2025-05-19 21:51:07 | INFO  | Please wait and do not abort execution.
2025-05-19 21:51:07.062181 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-19 21:51:07.064152 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-19 21:51:07.064995 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-19 21:51:07.065943 | orchestrator |
2025-05-19 21:51:07.066989 | orchestrator |
2025-05-19 21:51:07.067143 | orchestrator |
2025-05-19 21:51:07.068239 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:51:07.070110 | orchestrator | Monday 19 May 2025 21:51:07 +0000 (0:00:00.959) 0:00:38.930 ************
2025-05-19 21:51:07.070775 | orchestrator | ===============================================================================
2025-05-19 21:51:07.071001 | orchestrator | Write configuration file ------------------------------------------------ 3.74s
2025-05-19 21:51:07.071837 | orchestrator | Get initial list of available block devices ----------------------------- 1.29s
2025-05-19 21:51:07.072642 | orchestrator | Add known partitions to the list of available block devices ------------- 1.10s
2025-05-19 21:51:07.075004 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s
2025-05-19 21:51:07.076480 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.04s
2025-05-19 21:51:07.076517 | orchestrator | Add known partitions to the list of available block devices ------------- 1.03s
2025-05-19 21:51:07.077163 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s
2025-05-19 21:51:07.078747 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-05-19 21:51:07.078983 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-05-19 21:51:07.081280 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.64s
2025-05-19 21:51:07.081313 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2025-05-19 21:51:07.081428 | orchestrator | Add known partitions to the list of available block devices ------------- 0.60s
2025-05-19 21:51:07.082326 | orchestrator | Print configuration data ------------------------------------------------ 0.59s
2025-05-19 21:51:07.083130 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.58s
2025-05-19 21:51:07.083772 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2025-05-19 21:51:07.084569 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.56s
2025-05-19 21:51:07.085206 | orchestrator | Add known partitions to the list of available block devices ------------- 0.56s
2025-05-19 21:51:07.085986 | orchestrator | Add known links to the list of available block devices ------------------ 0.55s
2025-05-19 21:51:07.086562 | orchestrator | Print DB devices -------------------------------------------------------- 0.55s
2025-05-19 21:51:07.088970 | orchestrator | Set WAL devices config data --------------------------------------------- 0.53s
2025-05-19 21:51:19.582525 | orchestrator | 2025-05-19 21:51:19 | INFO  | Task f8110d44-751d-475b-a745-e03a2cf066f7 (sync inventory) is running in background. Output coming soon.
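In the block-only case taken in this run, the "Compile lvm_volumes" step maps each entry of `ceph_osd_devices` to an `lvm_volumes` entry whose LV and VG names embed `osd_lvm_uuid`, exactly as shown by "Print configuration data" above. A minimal sketch of that mapping (the helper name is ours, not OSISM's; input values are copied from the testbed-node-5 output):

```python
def compile_lvm_volumes(ceph_osd_devices: dict) -> list[dict]:
    """Block-only mapping: osd_lvm_uuid -> osd-block-<uuid> LV in ceph-<uuid> VG."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

devices = {
    "sdb": {"osd_lvm_uuid": "d6c00661-cf2a-5067-a507-d2ca4df6447b"},
    "sdc": {"osd_lvm_uuid": "cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8"},
}

print(compile_lvm_volumes(devices))
```

The block + db and block + wal variants that were skipped in this run would additionally attach `db`/`db_vg` and `wal`/`wal_vg` keys.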
2025-05-19 21:52:02.654074 | orchestrator | 2025-05-19 21:51:46 | INFO  | Starting group_vars file reorganization
2025-05-19 21:52:02.654185 | orchestrator | 2025-05-19 21:51:46 | INFO  | Moved 0 file(s) to their respective directories
2025-05-19 21:52:02.654202 | orchestrator | 2025-05-19 21:51:46 | INFO  | Group_vars file reorganization completed
2025-05-19 21:52:02.654213 | orchestrator | 2025-05-19 21:51:48 | INFO  | Starting variable preparation from inventory
2025-05-19 21:52:02.654224 | orchestrator | 2025-05-19 21:51:49 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-19 21:52:02.654236 | orchestrator | 2025-05-19 21:51:49 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-19 21:52:02.654247 | orchestrator | 2025-05-19 21:51:49 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-19 21:52:02.654258 | orchestrator | 2025-05-19 21:51:49 | INFO  | 3 file(s) written, 6 host(s) processed
2025-05-19 21:52:02.654269 | orchestrator | 2025-05-19 21:51:49 | INFO  | Variable preparation completed:
2025-05-19 21:52:02.654281 | orchestrator | 2025-05-19 21:51:50 | INFO  | Starting inventory overwrite handling
2025-05-19 21:52:02.654291 | orchestrator | 2025-05-19 21:51:50 | INFO  | Handling group overwrites in 99-overwrite
2025-05-19 21:52:02.654302 | orchestrator | 2025-05-19 21:51:50 | INFO  | Removing group frr:children from 60-generic
2025-05-19 21:52:02.654404 | orchestrator | 2025-05-19 21:51:50 | INFO  | Removing group storage:children from 50-kolla
2025-05-19 21:52:02.654418 | orchestrator | 2025-05-19 21:51:50 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-19 21:52:02.654429 | orchestrator | 2025-05-19 21:51:50 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-19 21:52:02.654440 | orchestrator | 2025-05-19 21:51:50 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-19 21:52:02.654451 | orchestrator | 2025-05-19 21:51:50 | INFO  | Handling group overwrites in 20-roles
2025-05-19 21:52:02.654462 | orchestrator | 2025-05-19 21:51:50 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-19 21:52:02.654473 | orchestrator | 2025-05-19 21:51:50 | INFO  | Removed 6 group(s) in total
2025-05-19 21:52:02.654485 | orchestrator | 2025-05-19 21:51:50 | INFO  | Inventory overwrite handling completed
2025-05-19 21:52:02.654496 | orchestrator | 2025-05-19 21:51:51 | INFO  | Starting merge of inventory files
2025-05-19 21:52:02.654522 | orchestrator | 2025-05-19 21:51:51 | INFO  | Inventory files merged successfully
2025-05-19 21:52:02.654545 | orchestrator | 2025-05-19 21:51:54 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-05-19 21:52:02.654557 | orchestrator | 2025-05-19 21:52:01 | INFO  | Successfully wrote ClusterShell configuration
2025-05-19 21:52:02.654571 | orchestrator | [master d895d45] 2025-05-19-21-52
2025-05-19 21:52:02.654584 | orchestrator | 1 file changed, 1090 deletions(-)
2025-05-19 21:52:02.654598 | orchestrator | delete mode 100644 clustershell/ansible.yaml
2025-05-19 21:52:04.614462 | orchestrator | 2025-05-19 21:52:04 | INFO  | Task 8ca1eaa2-9485-42a6-b7f3-ff4bf99f404d (ceph-create-lvm-devices) was prepared for execution.
2025-05-19 21:52:04.614554 | orchestrator | 2025-05-19 21:52:04 | INFO  | It takes a moment until task 8ca1eaa2-9485-42a6-b7f3-ff4bf99f404d (ceph-create-lvm-devices) has been started and output is visible here.
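The "Generating ClusterShell configuration from Ansible inventory" step logged above boils down to mapping inventory groups onto node lists that ClusterShell can consume. A minimal sketch under assumed data shapes (the group names mirror ones seen in this log, but the real osism code differs):

```python
# Sketch (assumption): an in-memory Ansible inventory as group -> hosts.
# ClusterShell can fold host lists into nodeset expressions itself;
# here we just emit comma-separated hosts per group.
inventory = {
    "ceph-osd": ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
    "manager": ["testbed-manager"],
}

def clustershell_groups(inv):
    """Return a flat {group: "host1,host2,..."} mapping."""
    return {group: ",".join(sorted(hosts)) for group, hosts in inv.items()}

groups = clustershell_groups(inventory)
for name, nodes in sorted(groups.items()):
    print(f"{name}: {nodes}")
```

Once written to a groups source, this lets `clush -g ceph-osd ...` address all OSD nodes at once, which is the point of regenerating the file after every inventory sync.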
2025-05-19 21:52:08.276649 | orchestrator |
2025-05-19 21:52:08.278348 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-19 21:52:08.279162 | orchestrator |
2025-05-19 21:52:08.280040 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-19 21:52:08.280826 | orchestrator | Monday 19 May 2025 21:52:08 +0000 (0:00:00.230) 0:00:00.230 ************
2025-05-19 21:52:08.489442 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-19 21:52:08.489527 | orchestrator |
2025-05-19 21:52:08.489774 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-19 21:52:08.490008 | orchestrator | Monday 19 May 2025 21:52:08 +0000 (0:00:00.217) 0:00:00.448 ************
2025-05-19 21:52:08.685831 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:52:08.687937 | orchestrator |
2025-05-19 21:52:08.688269 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:08.689208 | orchestrator | Monday 19 May 2025 21:52:08 +0000 (0:00:00.195) 0:00:00.643 ************
2025-05-19 21:52:08.983769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-19 21:52:08.984287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-19 21:52:08.985788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-19 21:52:08.986077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-19 21:52:08.986913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-19 21:52:08.987162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-19 21:52:08.987874 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-19 21:52:08.988207 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-19 21:52:08.988719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-19 21:52:08.989835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-19 21:52:08.990113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-19 21:52:08.990722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-19 21:52:08.991787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-19 21:52:08.992760 | orchestrator |
2025-05-19 21:52:08.993405 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:08.994126 | orchestrator | Monday 19 May 2025 21:52:08 +0000 (0:00:00.299) 0:00:00.943 ************
2025-05-19 21:52:09.301722 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:09.302484 | orchestrator |
2025-05-19 21:52:09.305754 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:09.305785 | orchestrator | Monday 19 May 2025 21:52:09 +0000 (0:00:00.316) 0:00:01.260 ************
2025-05-19 21:52:09.473394 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:09.479650 | orchestrator |
2025-05-19 21:52:09.479707 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:09.479721 | orchestrator | Monday 19 May 2025 21:52:09 +0000 (0:00:00.172) 0:00:01.432 ************
2025-05-19 21:52:09.639464 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:09.640251 | orchestrator |
2025-05-19 21:52:09.641202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:09.643987 | orchestrator | Monday 19 May 2025 21:52:09 +0000 (0:00:00.165) 0:00:01.597 ************
2025-05-19 21:52:09.795416 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:09.795495 | orchestrator |
2025-05-19 21:52:09.795731 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:09.796008 | orchestrator | Monday 19 May 2025 21:52:09 +0000 (0:00:00.156) 0:00:01.753 ************
2025-05-19 21:52:09.965388 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:09.966109 | orchestrator |
2025-05-19 21:52:09.967003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:09.968167 | orchestrator | Monday 19 May 2025 21:52:09 +0000 (0:00:00.169) 0:00:01.923 ************
2025-05-19 21:52:10.129080 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:10.129269 | orchestrator |
2025-05-19 21:52:10.130115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:10.132043 | orchestrator | Monday 19 May 2025 21:52:10 +0000 (0:00:00.164) 0:00:02.088 ************
2025-05-19 21:52:10.306548 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:10.306708 | orchestrator |
2025-05-19 21:52:10.306990 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:10.307223 | orchestrator | Monday 19 May 2025 21:52:10 +0000 (0:00:00.178) 0:00:02.266 ************
2025-05-19 21:52:10.469526 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:10.469687 | orchestrator |
2025-05-19 21:52:10.470116 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:10.470852 | orchestrator | Monday 19 May 2025 21:52:10 +0000 (0:00:00.162) 0:00:02.429 ************
2025-05-19 21:52:10.860640 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f)
2025-05-19 21:52:10.860872 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f)
2025-05-19 21:52:10.861476 | orchestrator |
2025-05-19 21:52:10.861932 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:10.862643 | orchestrator | Monday 19 May 2025 21:52:10 +0000 (0:00:00.389) 0:00:02.818 ************
2025-05-19 21:52:11.222934 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_65b1a457-74f9-440b-9c0b-913fdfb04314)
2025-05-19 21:52:11.223043 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_65b1a457-74f9-440b-9c0b-913fdfb04314)
2025-05-19 21:52:11.223647 | orchestrator |
2025-05-19 21:52:11.225933 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:11.226091 | orchestrator | Monday 19 May 2025 21:52:11 +0000 (0:00:00.363) 0:00:03.182 ************
2025-05-19 21:52:11.719261 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cd626c85-4d79-4ec3-873e-c38f80c6408d)
2025-05-19 21:52:11.719482 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cd626c85-4d79-4ec3-873e-c38f80c6408d)
2025-05-19 21:52:11.719781 | orchestrator |
2025-05-19 21:52:11.720895 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:11.721625 | orchestrator | Monday 19 May 2025 21:52:11 +0000 (0:00:00.495) 0:00:03.678 ************
2025-05-19 21:52:12.359861 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5aea9423-7155-4edc-a2c1-cc12eb50d261)
2025-05-19 21:52:12.361251 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5aea9423-7155-4edc-a2c1-cc12eb50d261)
2025-05-19 21:52:12.361664 | orchestrator |
2025-05-19 21:52:12.365790 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:12.365834 | orchestrator | Monday 19 May 2025 21:52:12 +0000 (0:00:00.640) 0:00:04.319 ************
2025-05-19 21:52:12.664743 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-19 21:52:12.665621 | orchestrator |
2025-05-19 21:52:12.669749 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:12.669792 | orchestrator | Monday 19 May 2025 21:52:12 +0000 (0:00:00.304) 0:00:04.623 ************
2025-05-19 21:52:13.039199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-19 21:52:13.040347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-19 21:52:13.041684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-19 21:52:13.043186 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-19 21:52:13.044791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-19 21:52:13.045962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-19 21:52:13.046911 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-19 21:52:13.047677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-19 21:52:13.048718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-19 21:52:13.049547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-19 21:52:13.050194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-19 21:52:13.051007 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-19 21:52:13.051779 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-19 21:52:13.052459 | orchestrator |
2025-05-19 21:52:13.053225 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:13.053661 | orchestrator | Monday 19 May 2025 21:52:13 +0000 (0:00:00.374) 0:00:04.998 ************
2025-05-19 21:52:13.207574 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:13.208485 | orchestrator |
2025-05-19 21:52:13.214661 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:13.214696 | orchestrator | Monday 19 May 2025 21:52:13 +0000 (0:00:00.168) 0:00:05.166 ************
2025-05-19 21:52:13.382084 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:13.382455 | orchestrator |
2025-05-19 21:52:13.385111 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:13.385729 | orchestrator | Monday 19 May 2025 21:52:13 +0000 (0:00:00.173) 0:00:05.339 ************
2025-05-19 21:52:13.573048 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:13.573120 | orchestrator |
2025-05-19 21:52:13.573133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:13.574709 | orchestrator | Monday 19 May 2025 21:52:13 +0000 (0:00:00.182) 0:00:05.522 ************
2025-05-19 21:52:13.744933 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:13.745026 | orchestrator |
2025-05-19 21:52:13.745041 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:13.745054 | orchestrator | Monday 19 May 2025 21:52:13 +0000 (0:00:00.176) 0:00:05.699 ************
2025-05-19 21:52:13.936387 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:13.936553 | orchestrator |
2025-05-19 21:52:13.937057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:13.937484 | orchestrator | Monday 19 May 2025 21:52:13 +0000 (0:00:00.196) 0:00:05.896 ************
2025-05-19 21:52:14.104756 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:14.104863 | orchestrator |
2025-05-19 21:52:14.104962 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:14.105421 | orchestrator | Monday 19 May 2025 21:52:14 +0000 (0:00:00.165) 0:00:06.061 ************
2025-05-19 21:52:14.276146 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:14.276377 | orchestrator |
2025-05-19 21:52:14.277484 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:14.278112 | orchestrator | Monday 19 May 2025 21:52:14 +0000 (0:00:00.173) 0:00:06.234 ************
2025-05-19 21:52:14.465403 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:14.466888 | orchestrator |
2025-05-19 21:52:14.467611 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:14.468758 | orchestrator | Monday 19 May 2025 21:52:14 +0000 (0:00:00.186) 0:00:06.421 ************
2025-05-19 21:52:15.522408 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-19 21:52:15.523285 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-19 21:52:15.524205 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-19 21:52:15.524278 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-19 21:52:15.524294 | orchestrator |
2025-05-19 21:52:15.524331 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:15.525257 | orchestrator | Monday 19 May 2025 21:52:15 +0000 (0:00:01.055) 0:00:07.476 ************
2025-05-19 21:52:15.726355 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:15.726939 | orchestrator |
2025-05-19 21:52:15.727956 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:15.729890 | orchestrator | Monday 19 May 2025 21:52:15 +0000 (0:00:00.207) 0:00:07.684 ************
2025-05-19 21:52:15.905012 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:15.905840 | orchestrator |
2025-05-19 21:52:15.909538 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:15.909643 | orchestrator | Monday 19 May 2025 21:52:15 +0000 (0:00:00.179) 0:00:07.863 ************
2025-05-19 21:52:16.083963 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:16.084501 | orchestrator |
2025-05-19 21:52:16.085595 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-19 21:52:16.086551 | orchestrator | Monday 19 May 2025 21:52:16 +0000 (0:00:00.178) 0:00:08.042 ************
2025-05-19 21:52:16.272873 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:16.273458 | orchestrator |
2025-05-19 21:52:16.274162 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-19 21:52:16.274946 | orchestrator | Monday 19 May 2025 21:52:16 +0000 (0:00:00.189) 0:00:08.231 ************
2025-05-19 21:52:16.403125 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:16.403471 | orchestrator |
2025-05-19 21:52:16.403910 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-19 21:52:16.404478 | orchestrator | Monday 19 May 2025 21:52:16 +0000 (0:00:00.130) 0:00:08.362 ************
2025-05-19 21:52:16.571649 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '52cfe21f-2cf0-5660-8f5b-0412bede7d5f'}})
2025-05-19 21:52:16.573509 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'}})
2025-05-19 21:52:16.573542 | orchestrator |
2025-05-19 21:52:16.574152 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-19 21:52:16.575132 | orchestrator | Monday 19 May 2025 21:52:16 +0000 (0:00:00.165) 0:00:08.527 ************
2025-05-19 21:52:18.525664 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})
2025-05-19 21:52:18.526721 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})
2025-05-19 21:52:18.529185 | orchestrator |
2025-05-19 21:52:18.529901 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-19 21:52:18.530811 | orchestrator | Monday 19 May 2025 21:52:18 +0000 (0:00:01.952) 0:00:10.479 ************
2025-05-19 21:52:18.681714 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})
2025-05-19 21:52:18.681811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})
2025-05-19 21:52:18.682170 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:18.683185 | orchestrator |
2025-05-19 21:52:18.683932 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-19 21:52:18.684709 | orchestrator | Monday 19 May 2025 21:52:18 +0000 (0:00:00.158) 0:00:10.638 ************
2025-05-19 21:52:20.104543 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})
2025-05-19 21:52:20.104678 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})
2025-05-19 21:52:20.105286 | orchestrator |
2025-05-19 21:52:20.105727 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-19 21:52:20.106470 | orchestrator | Monday 19 May 2025 21:52:20 +0000 (0:00:01.421) 0:00:12.060 ************
2025-05-19 21:52:20.260387 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})
2025-05-19 21:52:20.260480 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})
2025-05-19 21:52:20.260718 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:20.261349 | orchestrator |
2025-05-19 21:52:20.262074 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-19 21:52:20.262633 | orchestrator | Monday 19 May 2025 21:52:20 +0000 (0:00:00.158) 0:00:12.218 ************
2025-05-19 21:52:20.397847 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:20.398291 | orchestrator |
2025-05-19 21:52:20.399172 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-19 21:52:20.400214 | orchestrator | Monday 19 May 2025 21:52:20 +0000 (0:00:00.138) 0:00:12.356 ************
2025-05-19 21:52:20.734793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})
2025-05-19 21:52:20.735946 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})
2025-05-19 21:52:20.736267 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:20.738402 | orchestrator |
2025-05-19 21:52:20.739141 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-19 21:52:20.739668 | orchestrator | Monday 19 May 2025 21:52:20 +0000 (0:00:00.335) 0:00:12.692 ************
2025-05-19 21:52:20.869188 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:20.869730 | orchestrator |
2025-05-19 21:52:20.870635 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-19 21:52:20.871070 | orchestrator | Monday 19 May 2025 21:52:20 +0000 (0:00:00.135) 0:00:12.827 ************
2025-05-19 21:52:21.012376 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})
2025-05-19 21:52:21.012545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})
2025-05-19 21:52:21.013825 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:21.014565 | orchestrator |
2025-05-19 21:52:21.015788 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-19 21:52:21.016721 | orchestrator | Monday 19 May 2025 21:52:21 +0000 (0:00:00.142) 0:00:12.970 ************
2025-05-19 21:52:21.149215 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:21.150009 | orchestrator |
2025-05-19 21:52:21.151053 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-19 21:52:21.151932 | orchestrator | Monday 19 May 2025 21:52:21 +0000 (0:00:00.133) 0:00:13.103 ************
2025-05-19 21:52:21.287990 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})
2025-05-19 21:52:21.288090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})
2025-05-19 21:52:21.289006 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:21.290784 | orchestrator |
2025-05-19 21:52:21.290813 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-19 21:52:21.291863 | orchestrator | Monday 19 May 2025 21:52:21 +0000 (0:00:00.141) 0:00:13.244 ************
2025-05-19 21:52:21.430843 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:52:21.431044 | orchestrator |
2025-05-19 21:52:21.431989 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-19 21:52:21.432398 | orchestrator | Monday 19 May 2025 21:52:21 +0000 (0:00:00.142) 0:00:13.387 ************
2025-05-19 21:52:21.578228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})
2025-05-19 21:52:21.578986 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})
2025-05-19 21:52:21.580038 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:21.580763 | orchestrator |
2025-05-19 21:52:21.581543 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-19 21:52:21.583507 | orchestrator | Monday 19 May 2025 21:52:21 +0000 (0:00:00.149) 0:00:13.537 ************
2025-05-19 21:52:21.736746 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})
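The "Count OSDs put on ... defined in lvm_volumes" and "Fail if number of OSDs exceeds num_osds" tasks around this point amount to a per-VG tally with a limit check. A hedged sketch of the idea; the `data`/`data_vg`/`db_vg` field names follow ceph-ansible's `lvm_volumes` convention, while the limit logic is an assumption about what the playbook enforces:

```python
from collections import Counter

# Illustrative lvm_volumes entries; db_vg is only present when an OSD
# places its RocksDB on a shared DB device.
lvm_volumes = [
    {"data": "osd-block-1", "data_vg": "ceph-1", "db_vg": "ceph-db-0"},
    {"data": "osd-block-2", "data_vg": "ceph-2", "db_vg": "ceph-db-0"},
]

def osds_per_vg(volumes, key="db_vg"):
    """Count how many OSDs reference each shared VG."""
    return Counter(v[key] for v in volumes if key in v)

def check_limits(volumes, num_osds, key="db_vg"):
    """Raise if any shared VG hosts more OSDs than num_osds allows."""
    for vg, count in osds_per_vg(volumes, key).items():
        if count > num_osds:
            raise ValueError(f"{count} OSDs on {vg} exceeds num_osds={num_osds}")

counts = osds_per_vg(lvm_volumes)
check_limits(lvm_volumes, num_osds=2)  # 2 OSDs sharing ceph-db-0 is allowed
```

In this run every such task is skipped because no `ceph_db_devices`/`ceph_wal_devices` are configured, so there is nothing to count.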
2025-05-19 21:52:21.737829 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})
2025-05-19 21:52:21.740755 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:21.740791 | orchestrator |
2025-05-19 21:52:21.740805 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-19 21:52:21.741531 | orchestrator | Monday 19 May 2025 21:52:21 +0000 (0:00:00.156) 0:00:13.693 ************
2025-05-19 21:52:21.890725 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})
2025-05-19 21:52:21.890967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})
2025-05-19 21:52:21.891425 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:21.891742 | orchestrator |
2025-05-19 21:52:21.892445 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-19 21:52:21.892741 | orchestrator | Monday 19 May 2025 21:52:21 +0000 (0:00:00.155) 0:00:13.849 ************
2025-05-19 21:52:22.027292 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:22.027978 | orchestrator |
2025-05-19 21:52:22.029161 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-19 21:52:22.029877 | orchestrator | Monday 19 May 2025 21:52:22 +0000 (0:00:00.135) 0:00:13.984 ************
2025-05-19 21:52:22.158686 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:22.159840 | orchestrator |
2025-05-19 21:52:22.160053 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-19 21:52:22.161192 | orchestrator | Monday 19 May 2025 21:52:22 +0000 (0:00:00.129) 0:00:14.113 ************
2025-05-19 21:52:22.303219 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:22.303717 | orchestrator |
2025-05-19 21:52:22.304655 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-19 21:52:22.305841 | orchestrator | Monday 19 May 2025 21:52:22 +0000 (0:00:00.147) 0:00:14.261 ************
2025-05-19 21:52:22.640415 | orchestrator | ok: [testbed-node-3] => {
2025-05-19 21:52:22.640926 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-19 21:52:22.642108 | orchestrator | }
2025-05-19 21:52:22.643383 | orchestrator |
2025-05-19 21:52:22.644421 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-19 21:52:22.645479 | orchestrator | Monday 19 May 2025 21:52:22 +0000 (0:00:00.336) 0:00:14.598 ************
2025-05-19 21:52:22.773277 | orchestrator | ok: [testbed-node-3] => {
2025-05-19 21:52:22.774362 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-19 21:52:22.775586 | orchestrator | }
2025-05-19 21:52:22.776806 | orchestrator |
2025-05-19 21:52:22.777361 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-19 21:52:22.778560 | orchestrator | Monday 19 May 2025 21:52:22 +0000 (0:00:00.133) 0:00:14.731 ************
2025-05-19 21:52:22.923784 | orchestrator | ok: [testbed-node-3] => {
2025-05-19 21:52:22.924359 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-19 21:52:22.925442 | orchestrator | }
2025-05-19 21:52:22.926526 | orchestrator |
2025-05-19 21:52:22.927938 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-19 21:52:22.928590 | orchestrator | Monday 19 May 2025 21:52:22 +0000 (0:00:00.148) 0:00:14.880 ************
2025-05-19 21:52:23.572544 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:52:23.572819 | orchestrator |
2025-05-19 21:52:23.574176 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-19 21:52:23.574690 | orchestrator | Monday 19 May 2025 21:52:23 +0000 (0:00:00.647) 0:00:15.527 ************
2025-05-19 21:52:24.102179 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:52:24.102379 | orchestrator |
2025-05-19 21:52:24.102775 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-19 21:52:24.103536 | orchestrator | Monday 19 May 2025 21:52:24 +0000 (0:00:00.537) 0:00:16.057 ************
2025-05-19 21:52:24.638602 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:52:24.639718 | orchestrator |
2025-05-19 21:52:24.640770 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-19 21:52:24.641086 | orchestrator | Monday 19 May 2025 21:52:24 +0000 (0:00:00.143) 0:00:16.595 ************
2025-05-19 21:52:24.781226 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:52:24.781389 | orchestrator |
2025-05-19 21:52:24.781465 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-19 21:52:24.782181 | orchestrator | Monday 19 May 2025 21:52:24 +0000 (0:00:00.121) 0:00:16.738 ************
2025-05-19 21:52:24.902708 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:24.902966 | orchestrator |
2025-05-19 21:52:24.904258 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-19 21:52:24.905192 | orchestrator | Monday 19 May 2025 21:52:24 +0000 (0:00:00.106) 0:00:16.860 ************
2025-05-19 21:52:25.009721 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:25.010516 | orchestrator |
2025-05-19 21:52:25.011829 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-19 21:52:25.012775 | orchestrator | Monday 19 May 2025 21:52:25 +0000 (0:00:00.133) 0:00:16.967 ************
2025-05-19 21:52:25.142862 | orchestrator | ok: [testbed-node-3] => {
2025-05-19 21:52:25.143407 | orchestrator |  "vgs_report": {
2025-05-19 21:52:25.144937 | orchestrator |  "vg": []
2025-05-19 21:52:25.145982 | orchestrator |  }
2025-05-19 21:52:25.146975 | orchestrator | }
2025-05-19 21:52:25.148278 | orchestrator |
2025-05-19 21:52:25.150106 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-19 21:52:25.151332 | orchestrator | Monday 19 May 2025 21:52:25 +0000 (0:00:00.133) 0:00:17.101 ************
2025-05-19 21:52:25.268826 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:25.269528 | orchestrator |
2025-05-19 21:52:25.271497 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-19 21:52:25.271540 | orchestrator | Monday 19 May 2025 21:52:25 +0000 (0:00:00.125) 0:00:17.226 ************
2025-05-19 21:52:25.402147 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:25.402429 | orchestrator |
2025-05-19 21:52:25.403108 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-19 21:52:25.405213 | orchestrator | Monday 19 May 2025 21:52:25 +0000 (0:00:00.132) 0:00:17.359 ************
2025-05-19 21:52:25.722576 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:25.723001 | orchestrator |
2025-05-19 21:52:25.724258 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-19 21:52:25.729259 | orchestrator | Monday 19 May 2025 21:52:25 +0000 (0:00:00.321) 0:00:17.680 ************
2025-05-19 21:52:25.868016 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:25.868887 | orchestrator |
2025-05-19 21:52:25.869123 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-19 21:52:25.870691 | orchestrator | Monday 19 May 2025 21:52:25 +0000 (0:00:00.145) 0:00:17.826 ************
2025-05-19 21:52:26.006856 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:26.007443 | orchestrator |
2025-05-19 21:52:26.008135 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-19 21:52:26.009553 | orchestrator | Monday 19 May 2025 21:52:26 +0000 (0:00:00.137) 0:00:17.964 ************
2025-05-19 21:52:26.141875 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:26.142089 | orchestrator |
2025-05-19 21:52:26.142739 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-19 21:52:26.144931 | orchestrator | Monday 19 May 2025 21:52:26 +0000 (0:00:00.132) 0:00:18.096 ************
2025-05-19 21:52:26.291719 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:26.292623 | orchestrator |
2025-05-19 21:52:26.293495 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-19 21:52:26.294624 | orchestrator | Monday 19 May 2025 21:52:26 +0000 (0:00:00.153) 0:00:18.250 ************
2025-05-19 21:52:26.434986 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:26.435192 | orchestrator |
2025-05-19 21:52:26.436212 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-19 21:52:26.437492 | orchestrator | Monday 19 May 2025 21:52:26 +0000 (0:00:00.141) 0:00:18.391 ************
2025-05-19 21:52:26.569692 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:26.570550 | orchestrator |
2025-05-19 21:52:26.571896 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-19 21:52:26.572777 | orchestrator | Monday 19 May 2025 21:52:26 +0000 (0:00:00.135) 0:00:18.527 ************
2025-05-19 21:52:26.727091 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:52:26.728836 | orchestrator |
2025-05-19 21:52:26.729689 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-19 21:52:26.730196 |
orchestrator | Monday 19 May 2025 21:52:26 +0000 (0:00:00.154) 0:00:18.681 ************ 2025-05-19 21:52:26.875208 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:26.875784 | orchestrator | 2025-05-19 21:52:26.876070 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-19 21:52:26.876718 | orchestrator | Monday 19 May 2025 21:52:26 +0000 (0:00:00.152) 0:00:18.834 ************ 2025-05-19 21:52:27.006279 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:27.007158 | orchestrator | 2025-05-19 21:52:27.009879 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-19 21:52:27.009912 | orchestrator | Monday 19 May 2025 21:52:27 +0000 (0:00:00.130) 0:00:18.964 ************ 2025-05-19 21:52:27.136160 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:27.136387 | orchestrator | 2025-05-19 21:52:27.137389 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-19 21:52:27.139249 | orchestrator | Monday 19 May 2025 21:52:27 +0000 (0:00:00.129) 0:00:19.093 ************ 2025-05-19 21:52:27.257131 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:27.260474 | orchestrator | 2025-05-19 21:52:27.260526 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-19 21:52:27.264638 | orchestrator | Monday 19 May 2025 21:52:27 +0000 (0:00:00.120) 0:00:19.214 ************ 2025-05-19 21:52:27.615224 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 21:52:27.615972 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:27.618869 | orchestrator | skipping: [testbed-node-3] 2025-05-19 
21:52:27.619464 | orchestrator | 2025-05-19 21:52:27.620079 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-19 21:52:27.620579 | orchestrator | Monday 19 May 2025 21:52:27 +0000 (0:00:00.357) 0:00:19.572 ************ 2025-05-19 21:52:27.802493 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 21:52:27.802605 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:27.802688 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:27.804426 | orchestrator | 2025-05-19 21:52:27.804452 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-19 21:52:27.804466 | orchestrator | Monday 19 May 2025 21:52:27 +0000 (0:00:00.184) 0:00:19.756 ************ 2025-05-19 21:52:27.958929 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 21:52:27.960263 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:27.960723 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:27.961013 | orchestrator | 2025-05-19 21:52:27.961552 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-19 21:52:27.961826 | orchestrator | Monday 19 May 2025 21:52:27 +0000 (0:00:00.161) 0:00:19.917 ************ 2025-05-19 21:52:28.127855 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 
21:52:28.128070 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:28.128643 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:28.129085 | orchestrator | 2025-05-19 21:52:28.129431 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-19 21:52:28.129806 | orchestrator | Monday 19 May 2025 21:52:28 +0000 (0:00:00.168) 0:00:20.085 ************ 2025-05-19 21:52:28.281266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 21:52:28.282183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:28.282884 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:28.284413 | orchestrator | 2025-05-19 21:52:28.285591 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-19 21:52:28.286355 | orchestrator | Monday 19 May 2025 21:52:28 +0000 (0:00:00.153) 0:00:20.239 ************ 2025-05-19 21:52:28.445832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 21:52:28.446067 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:28.446169 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:28.446868 | orchestrator | 2025-05-19 21:52:28.447118 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-19 21:52:28.447751 | orchestrator | Monday 19 May 2025 21:52:28 
+0000 (0:00:00.163) 0:00:20.402 ************ 2025-05-19 21:52:28.595680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 21:52:28.595794 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:28.595892 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:28.596733 | orchestrator | 2025-05-19 21:52:28.597374 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-19 21:52:28.598212 | orchestrator | Monday 19 May 2025 21:52:28 +0000 (0:00:00.151) 0:00:20.553 ************ 2025-05-19 21:52:28.766402 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 21:52:28.767591 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:28.768792 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:28.769961 | orchestrator | 2025-05-19 21:52:28.770869 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-19 21:52:28.772077 | orchestrator | Monday 19 May 2025 21:52:28 +0000 (0:00:00.170) 0:00:20.723 ************ 2025-05-19 21:52:29.309995 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:52:29.310462 | orchestrator | 2025-05-19 21:52:29.310489 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-19 21:52:29.310498 | orchestrator | Monday 19 May 2025 21:52:29 +0000 (0:00:00.544) 0:00:21.268 ************ 2025-05-19 21:52:29.825950 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:52:29.826724 | 
orchestrator | 2025-05-19 21:52:29.828346 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-19 21:52:29.828384 | orchestrator | Monday 19 May 2025 21:52:29 +0000 (0:00:00.513) 0:00:21.782 ************ 2025-05-19 21:52:30.002814 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:52:30.002912 | orchestrator | 2025-05-19 21:52:30.003524 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-19 21:52:30.004128 | orchestrator | Monday 19 May 2025 21:52:29 +0000 (0:00:00.178) 0:00:21.961 ************ 2025-05-19 21:52:30.174470 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'vg_name': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'}) 2025-05-19 21:52:30.174698 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'vg_name': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'}) 2025-05-19 21:52:30.176156 | orchestrator | 2025-05-19 21:52:30.179249 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-19 21:52:30.179284 | orchestrator | Monday 19 May 2025 21:52:30 +0000 (0:00:00.171) 0:00:22.132 ************ 2025-05-19 21:52:30.529125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 21:52:30.529225 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:30.530365 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:30.530909 | orchestrator | 2025-05-19 21:52:30.532434 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-19 21:52:30.532806 | orchestrator | Monday 19 May 2025 21:52:30 +0000 
(0:00:00.354) 0:00:22.486 ************ 2025-05-19 21:52:30.679687 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 21:52:30.680067 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:30.681286 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:30.682247 | orchestrator | 2025-05-19 21:52:30.683448 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-19 21:52:30.684378 | orchestrator | Monday 19 May 2025 21:52:30 +0000 (0:00:00.150) 0:00:22.637 ************ 2025-05-19 21:52:30.842710 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'})  2025-05-19 21:52:30.842825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'})  2025-05-19 21:52:30.844111 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:52:30.845368 | orchestrator | 2025-05-19 21:52:30.848523 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-19 21:52:30.850996 | orchestrator | Monday 19 May 2025 21:52:30 +0000 (0:00:00.162) 0:00:22.799 ************ 2025-05-19 21:52:31.140416 | orchestrator | ok: [testbed-node-3] => { 2025-05-19 21:52:31.141509 | orchestrator |  "lvm_report": { 2025-05-19 21:52:31.142858 | orchestrator |  "lv": [ 2025-05-19 21:52:31.144499 | orchestrator |  { 2025-05-19 21:52:31.145069 | orchestrator |  "lv_name": "osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f", 2025-05-19 21:52:31.145869 | orchestrator |  "vg_name": "ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f" 2025-05-19 
21:52:31.147223 | orchestrator |  }, 2025-05-19 21:52:31.148240 | orchestrator |  { 2025-05-19 21:52:31.148887 | orchestrator |  "lv_name": "osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9", 2025-05-19 21:52:31.149508 | orchestrator |  "vg_name": "ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9" 2025-05-19 21:52:31.150296 | orchestrator |  } 2025-05-19 21:52:31.151194 | orchestrator |  ], 2025-05-19 21:52:31.151827 | orchestrator |  "pv": [ 2025-05-19 21:52:31.152654 | orchestrator |  { 2025-05-19 21:52:31.153072 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-19 21:52:31.154472 | orchestrator |  "vg_name": "ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f" 2025-05-19 21:52:31.154657 | orchestrator |  }, 2025-05-19 21:52:31.155136 | orchestrator |  { 2025-05-19 21:52:31.155496 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-19 21:52:31.155825 | orchestrator |  "vg_name": "ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9" 2025-05-19 21:52:31.156166 | orchestrator |  } 2025-05-19 21:52:31.158099 | orchestrator |  ] 2025-05-19 21:52:31.158267 | orchestrator |  } 2025-05-19 21:52:31.158664 | orchestrator | } 2025-05-19 21:52:31.158998 | orchestrator | 2025-05-19 21:52:31.159623 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-19 21:52:31.160087 | orchestrator | 2025-05-19 21:52:31.161679 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 21:52:31.164910 | orchestrator | Monday 19 May 2025 21:52:31 +0000 (0:00:00.298) 0:00:23.098 ************ 2025-05-19 21:52:31.392087 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-19 21:52:31.392343 | orchestrator | 2025-05-19 21:52:31.393395 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-19 21:52:31.393666 | orchestrator | Monday 19 May 2025 21:52:31 +0000 (0:00:00.249) 0:00:23.347 ************ 2025-05-19 21:52:31.638071 | orchestrator | ok: 
[testbed-node-4] 2025-05-19 21:52:31.638575 | orchestrator | 2025-05-19 21:52:31.639052 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:31.639455 | orchestrator | Monday 19 May 2025 21:52:31 +0000 (0:00:00.246) 0:00:23.594 ************ 2025-05-19 21:52:32.059226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-19 21:52:32.059919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-19 21:52:32.061592 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-19 21:52:32.061886 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-19 21:52:32.062990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-19 21:52:32.063777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-19 21:52:32.064523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-19 21:52:32.065557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-19 21:52:32.066126 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-19 21:52:32.066938 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-19 21:52:32.067980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-19 21:52:32.068446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-19 21:52:32.069237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-19 21:52:32.070460 | orchestrator | 2025-05-19 
21:52:32.071478 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:32.072285 | orchestrator | Monday 19 May 2025 21:52:32 +0000 (0:00:00.420) 0:00:24.015 ************ 2025-05-19 21:52:32.259085 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:32.262504 | orchestrator | 2025-05-19 21:52:32.264206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:32.265127 | orchestrator | Monday 19 May 2025 21:52:32 +0000 (0:00:00.201) 0:00:24.217 ************ 2025-05-19 21:52:32.451461 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:32.451754 | orchestrator | 2025-05-19 21:52:32.452187 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:32.453037 | orchestrator | Monday 19 May 2025 21:52:32 +0000 (0:00:00.191) 0:00:24.409 ************ 2025-05-19 21:52:33.079404 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:33.079699 | orchestrator | 2025-05-19 21:52:33.080399 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:33.081092 | orchestrator | Monday 19 May 2025 21:52:33 +0000 (0:00:00.627) 0:00:25.036 ************ 2025-05-19 21:52:33.315883 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:33.320262 | orchestrator | 2025-05-19 21:52:33.320297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:33.320356 | orchestrator | Monday 19 May 2025 21:52:33 +0000 (0:00:00.230) 0:00:25.267 ************ 2025-05-19 21:52:33.556183 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:33.556564 | orchestrator | 2025-05-19 21:52:33.557424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:33.558726 | orchestrator | Monday 19 May 2025 21:52:33 +0000 (0:00:00.243) 
0:00:25.510 ************ 2025-05-19 21:52:33.776409 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:33.776510 | orchestrator | 2025-05-19 21:52:33.777642 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:33.777733 | orchestrator | Monday 19 May 2025 21:52:33 +0000 (0:00:00.223) 0:00:25.734 ************ 2025-05-19 21:52:33.990176 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:33.993113 | orchestrator | 2025-05-19 21:52:33.993211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:33.994468 | orchestrator | Monday 19 May 2025 21:52:33 +0000 (0:00:00.207) 0:00:25.942 ************ 2025-05-19 21:52:34.212573 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:34.213972 | orchestrator | 2025-05-19 21:52:34.214481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:34.215264 | orchestrator | Monday 19 May 2025 21:52:34 +0000 (0:00:00.225) 0:00:26.168 ************ 2025-05-19 21:52:34.674351 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177) 2025-05-19 21:52:34.675233 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177) 2025-05-19 21:52:34.675940 | orchestrator | 2025-05-19 21:52:34.676872 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:34.678381 | orchestrator | Monday 19 May 2025 21:52:34 +0000 (0:00:00.463) 0:00:26.632 ************ 2025-05-19 21:52:35.132722 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_53ed34a9-290d-4031-aa3e-f95b5c6d33b8) 2025-05-19 21:52:35.132867 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_53ed34a9-290d-4031-aa3e-f95b5c6d33b8) 2025-05-19 21:52:35.133499 | orchestrator | 2025-05-19 
21:52:35.134703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:35.135348 | orchestrator | Monday 19 May 2025 21:52:35 +0000 (0:00:00.450) 0:00:27.082 ************ 2025-05-19 21:52:35.541367 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_934db128-59d0-4992-8eb9-92fedfad2305) 2025-05-19 21:52:35.541566 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_934db128-59d0-4992-8eb9-92fedfad2305) 2025-05-19 21:52:35.542240 | orchestrator | 2025-05-19 21:52:35.542765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:35.543728 | orchestrator | Monday 19 May 2025 21:52:35 +0000 (0:00:00.416) 0:00:27.499 ************ 2025-05-19 21:52:35.958418 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3) 2025-05-19 21:52:35.958544 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3) 2025-05-19 21:52:35.959405 | orchestrator | 2025-05-19 21:52:35.959685 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:35.960628 | orchestrator | Monday 19 May 2025 21:52:35 +0000 (0:00:00.414) 0:00:27.913 ************ 2025-05-19 21:52:36.285941 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 21:52:36.286116 | orchestrator | 2025-05-19 21:52:36.287442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:36.288496 | orchestrator | Monday 19 May 2025 21:52:36 +0000 (0:00:00.330) 0:00:28.244 ************ 2025-05-19 21:52:36.904673 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-19 21:52:36.905006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 
2025-05-19 21:52:36.905652 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-19 21:52:36.907733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-19 21:52:36.908337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-19 21:52:36.909409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-19 21:52:36.910404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-19 21:52:36.910970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-19 21:52:36.911470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-19 21:52:36.911978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-19 21:52:36.912407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-19 21:52:36.913267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-19 21:52:36.913992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-19 21:52:36.914608 | orchestrator | 2025-05-19 21:52:36.915048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:36.915575 | orchestrator | Monday 19 May 2025 21:52:36 +0000 (0:00:00.613) 0:00:28.857 ************ 2025-05-19 21:52:37.118803 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:37.119242 | orchestrator | 2025-05-19 21:52:37.120251 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:37.121079 | orchestrator 
| Monday 19 May 2025 21:52:37 +0000 (0:00:00.218) 0:00:29.076 ************ 2025-05-19 21:52:37.314618 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:37.314721 | orchestrator | 2025-05-19 21:52:37.315545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:37.316334 | orchestrator | Monday 19 May 2025 21:52:37 +0000 (0:00:00.196) 0:00:29.272 ************ 2025-05-19 21:52:37.532782 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:37.532973 | orchestrator | 2025-05-19 21:52:37.533593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:37.534528 | orchestrator | Monday 19 May 2025 21:52:37 +0000 (0:00:00.218) 0:00:29.491 ************ 2025-05-19 21:52:37.756884 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:37.757203 | orchestrator | 2025-05-19 21:52:37.757841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:37.758682 | orchestrator | Monday 19 May 2025 21:52:37 +0000 (0:00:00.218) 0:00:29.709 ************ 2025-05-19 21:52:37.957965 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:37.958234 | orchestrator | 2025-05-19 21:52:37.959089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:37.959588 | orchestrator | Monday 19 May 2025 21:52:37 +0000 (0:00:00.206) 0:00:29.916 ************ 2025-05-19 21:52:38.156397 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:38.156814 | orchestrator | 2025-05-19 21:52:38.157330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:38.158492 | orchestrator | Monday 19 May 2025 21:52:38 +0000 (0:00:00.198) 0:00:30.114 ************ 2025-05-19 21:52:38.349064 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:38.349459 | orchestrator | 2025-05-19 
21:52:38.350454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:38.350892 | orchestrator | Monday 19 May 2025 21:52:38 +0000 (0:00:00.193) 0:00:30.307 ************ 2025-05-19 21:52:38.551286 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:38.553929 | orchestrator | 2025-05-19 21:52:38.555377 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:38.555413 | orchestrator | Monday 19 May 2025 21:52:38 +0000 (0:00:00.200) 0:00:30.507 ************ 2025-05-19 21:52:39.373473 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-19 21:52:39.374364 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-19 21:52:39.375126 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-19 21:52:39.376027 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-19 21:52:39.376808 | orchestrator | 2025-05-19 21:52:39.377590 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:39.378553 | orchestrator | Monday 19 May 2025 21:52:39 +0000 (0:00:00.822) 0:00:31.330 ************ 2025-05-19 21:52:39.555211 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:39.555773 | orchestrator | 2025-05-19 21:52:39.556755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:39.557667 | orchestrator | Monday 19 May 2025 21:52:39 +0000 (0:00:00.184) 0:00:31.514 ************ 2025-05-19 21:52:39.753270 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:39.753510 | orchestrator | 2025-05-19 21:52:39.754132 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:39.754559 | orchestrator | Monday 19 May 2025 21:52:39 +0000 (0:00:00.197) 0:00:31.712 ************ 2025-05-19 21:52:40.341555 | orchestrator | skipping: [testbed-node-4] 2025-05-19 
21:52:40.341925 | orchestrator | 2025-05-19 21:52:40.342808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:40.345571 | orchestrator | Monday 19 May 2025 21:52:40 +0000 (0:00:00.586) 0:00:32.298 ************ 2025-05-19 21:52:40.542236 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:40.542867 | orchestrator | 2025-05-19 21:52:40.543831 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-19 21:52:40.544504 | orchestrator | Monday 19 May 2025 21:52:40 +0000 (0:00:00.202) 0:00:32.501 ************ 2025-05-19 21:52:40.678757 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:52:40.679081 | orchestrator | 2025-05-19 21:52:40.679959 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-19 21:52:40.680646 | orchestrator | Monday 19 May 2025 21:52:40 +0000 (0:00:00.136) 0:00:32.637 ************ 2025-05-19 21:52:40.864012 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd2161015-9b2d-55ef-85cd-b20f941db83a'}}) 2025-05-19 21:52:40.864114 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '73ec3cc1-218e-51bb-a362-2e871742ea52'}}) 2025-05-19 21:52:40.864127 | orchestrator | 2025-05-19 21:52:40.864140 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-19 21:52:40.864879 | orchestrator | Monday 19 May 2025 21:52:40 +0000 (0:00:00.179) 0:00:32.816 ************ 2025-05-19 21:52:42.745788 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'}) 2025-05-19 21:52:42.745927 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'}) 2025-05-19 
21:52:42.748030 | orchestrator |
2025-05-19 21:52:42.748825 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-19 21:52:42.749785 | orchestrator | Monday 19 May 2025 21:52:42 +0000 (0:00:01.881) 0:00:34.698 ************
2025-05-19 21:52:42.885838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:42.885970 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:42.886168 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:42.887010 | orchestrator |
2025-05-19 21:52:42.887036 | orchestrator | TASK [Create block LVs] ********************************************************
2025-05-19 21:52:42.887388 | orchestrator | Monday 19 May 2025 21:52:42 +0000 (0:00:00.146) 0:00:34.844 ************
2025-05-19 21:52:44.173583 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:44.173928 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:44.174662 | orchestrator |
2025-05-19 21:52:44.175641 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-05-19 21:52:44.176384 | orchestrator | Monday 19 May 2025 21:52:44 +0000 (0:00:01.283) 0:00:36.127 ************
2025-05-19 21:52:44.321510 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:44.321749 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:44.322085 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:44.323138 | orchestrator |
2025-05-19 21:52:44.323875 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-05-19 21:52:44.324722 | orchestrator | Monday 19 May 2025 21:52:44 +0000 (0:00:00.151) 0:00:36.278 ************
2025-05-19 21:52:44.455506 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:44.455614 | orchestrator |
2025-05-19 21:52:44.456423 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-05-19 21:52:44.457366 | orchestrator | Monday 19 May 2025 21:52:44 +0000 (0:00:00.134) 0:00:36.413 ************
2025-05-19 21:52:44.603034 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:44.603284 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:44.604570 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:44.604674 | orchestrator |
2025-05-19 21:52:44.605532 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-05-19 21:52:44.606219 | orchestrator | Monday 19 May 2025 21:52:44 +0000 (0:00:00.148) 0:00:36.561 ************
2025-05-19 21:52:44.720077 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:44.720799 | orchestrator |
2025-05-19 21:52:44.721758 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-05-19 21:52:44.722510 | orchestrator | Monday 19 May 2025 21:52:44 +0000 (0:00:00.116) 0:00:36.678 ************
2025-05-19 21:52:44.870604 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:44.874596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:44.876771 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:44.877525 | orchestrator |
2025-05-19 21:52:44.881780 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-05-19 21:52:44.887730 | orchestrator | Monday 19 May 2025 21:52:44 +0000 (0:00:00.149) 0:00:36.827 ************
2025-05-19 21:52:45.207027 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:45.207138 | orchestrator |
2025-05-19 21:52:45.207592 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-05-19 21:52:45.208531 | orchestrator | Monday 19 May 2025 21:52:45 +0000 (0:00:00.337) 0:00:37.164 ************
2025-05-19 21:52:45.358687 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:45.358796 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:45.359210 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:45.360246 | orchestrator |
2025-05-19 21:52:45.360928 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-05-19 21:52:45.361418 | orchestrator | Monday 19 May 2025 21:52:45 +0000 (0:00:00.151) 0:00:37.315 ************
2025-05-19 21:52:45.502186 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:52:45.502514 | orchestrator |
2025-05-19 21:52:45.503139 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-19 21:52:45.503785 | orchestrator | Monday 19 May 2025 21:52:45 +0000 (0:00:00.140) 0:00:37.456 ************
2025-05-19 21:52:45.654617 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:45.655028 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:45.655962 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:45.656825 | orchestrator |
2025-05-19 21:52:45.657707 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-19 21:52:45.658425 | orchestrator | Monday 19 May 2025 21:52:45 +0000 (0:00:00.155) 0:00:37.611 ************
2025-05-19 21:52:45.821561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:45.822437 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:45.825145 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:45.825986 | orchestrator |
2025-05-19 21:52:45.826843 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-19 21:52:45.828040 | orchestrator | Monday 19 May 2025 21:52:45 +0000 (0:00:00.166) 0:00:37.778 ************
2025-05-19 21:52:45.968803 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:45.968908 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:45.971722 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:45.971751 | orchestrator |
2025-05-19 21:52:45.971765 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-19 21:52:45.972077 | orchestrator | Monday 19 May 2025 21:52:45 +0000 (0:00:00.145) 0:00:37.923 ************
2025-05-19 21:52:46.099845 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:46.102062 | orchestrator |
2025-05-19 21:52:46.102987 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-19 21:52:46.103599 | orchestrator | Monday 19 May 2025 21:52:46 +0000 (0:00:00.133) 0:00:38.057 ************
2025-05-19 21:52:46.227854 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:46.228070 | orchestrator |
2025-05-19 21:52:46.228689 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-19 21:52:46.229169 | orchestrator | Monday 19 May 2025 21:52:46 +0000 (0:00:00.126) 0:00:38.183 ************
2025-05-19 21:52:46.352078 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:46.352694 | orchestrator |
2025-05-19 21:52:46.353483 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-19 21:52:46.354061 | orchestrator | Monday 19 May 2025 21:52:46 +0000 (0:00:00.125) 0:00:38.309 ************
2025-05-19 21:52:46.485674 | orchestrator | ok: [testbed-node-4] => {
2025-05-19 21:52:46.486052 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-05-19 21:52:46.486619 | orchestrator | }
2025-05-19 21:52:46.487391 | orchestrator |
2025-05-19 21:52:46.488584 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-19 21:52:46.488641 | orchestrator | Monday 19 May 2025 21:52:46 +0000 (0:00:00.134) 0:00:38.443 ************
2025-05-19 21:52:46.619883 | orchestrator | ok: [testbed-node-4] => {
2025-05-19 21:52:46.620174 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-05-19 21:52:46.622529 | orchestrator | }
2025-05-19 21:52:46.624149 | orchestrator |
2025-05-19 21:52:46.624425 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-19 21:52:46.625154 | orchestrator | Monday 19 May 2025 21:52:46 +0000 (0:00:00.134) 0:00:38.578 ************
2025-05-19 21:52:46.753513 | orchestrator | ok: [testbed-node-4] => {
2025-05-19 21:52:46.754414 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-05-19 21:52:46.754748 | orchestrator | }
2025-05-19 21:52:46.755560 | orchestrator |
2025-05-19 21:52:46.756971 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-19 21:52:46.756980 | orchestrator | Monday 19 May 2025 21:52:46 +0000 (0:00:00.133) 0:00:38.711 ************
2025-05-19 21:52:47.472893 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:52:47.473036 | orchestrator |
2025-05-19 21:52:47.473117 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-19 21:52:47.473134 | orchestrator | Monday 19 May 2025 21:52:47 +0000 (0:00:00.707) 0:00:39.418 ************
2025-05-19 21:52:47.990352 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:52:47.991253 | orchestrator |
2025-05-19 21:52:47.991520 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-19 21:52:47.992174 | orchestrator | Monday 19 May 2025 21:52:47 +0000 (0:00:00.528) 0:00:39.947 ************
2025-05-19 21:52:48.505730 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:52:48.507177 | orchestrator |
2025-05-19 21:52:48.508254 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-19 21:52:48.508579 | orchestrator | Monday 19 May 2025 21:52:48 +0000 (0:00:00.513) 0:00:40.461 ************
2025-05-19
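The three "Gather … VGs" tasks collect per-class volume-group reports and the "Combine JSON" task merges them into the single `vgs_report` printed next (empty here, since no separate DB/WAL devices are configured). Assuming each gather step captures `vgs --reportformat json`-style output, the merge can be sketched as (variable names are illustrative stand-ins for the registered `_db/_wal/_db_wal_vgs_cmd_output` results):

```python
import json

# Hypothetical captured outputs; `vgs --reportformat json` emits
# a document of the shape {"report": [{"vg": [...]}]}.
db_out = '{"report": [{"vg": []}]}'
wal_out = '{"report": [{"vg": []}]}'
db_wal_out = '{"report": [{"vg": [{"vg_name": "ceph-db-wal-0"}]}]}'

def combine_vg_reports(*outputs):
    """Merge the 'vg' lists of several vgs JSON reports into one report."""
    merged = []
    for out in outputs:
        for report in json.loads(out)["report"]:
            merged.extend(report.get("vg", []))
    return {"vg": merged}

print(json.dumps({"vgs_report": combine_vg_reports(db_out, wal_out, db_wal_out)}))
```

In the run above all three captures are empty, which is why the subsequent size calculations are skipped and `vgs_report` prints as `{"vg": []}`.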
21:52:48.656173 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:52:48.656784 | orchestrator |
2025-05-19 21:52:48.657981 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-19 21:52:48.658596 | orchestrator | Monday 19 May 2025 21:52:48 +0000 (0:00:00.151) 0:00:40.613 ************
2025-05-19 21:52:48.762149 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:48.763448 | orchestrator |
2025-05-19 21:52:48.764292 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-19 21:52:48.765345 | orchestrator | Monday 19 May 2025 21:52:48 +0000 (0:00:00.106) 0:00:40.719 ************
2025-05-19 21:52:48.868197 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:48.868459 | orchestrator |
2025-05-19 21:52:48.869549 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-19 21:52:48.871034 | orchestrator | Monday 19 May 2025 21:52:48 +0000 (0:00:00.106) 0:00:40.825 ************
2025-05-19 21:52:49.001847 | orchestrator | ok: [testbed-node-4] => {
2025-05-19 21:52:49.002267 | orchestrator |     "vgs_report": {
2025-05-19 21:52:49.003877 | orchestrator |         "vg": []
2025-05-19 21:52:49.005190 | orchestrator |     }
2025-05-19 21:52:49.006871 | orchestrator | }
2025-05-19 21:52:49.007616 | orchestrator |
2025-05-19 21:52:49.007897 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-19 21:52:49.008372 | orchestrator | Monday 19 May 2025 21:52:48 +0000 (0:00:00.133) 0:00:40.959 ************
2025-05-19 21:52:49.128028 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:49.128540 | orchestrator |
2025-05-19 21:52:49.129524 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-19 21:52:49.130528 | orchestrator | Monday 19 May 2025 21:52:49 +0000 (0:00:00.126) 0:00:41.086 ************
2025-05-19 21:52:49.258641 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:49.258824 | orchestrator |
2025-05-19 21:52:49.259902 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-19 21:52:49.261534 | orchestrator | Monday 19 May 2025 21:52:49 +0000 (0:00:00.129) 0:00:41.216 ************
2025-05-19 21:52:49.393399 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:49.393828 | orchestrator |
2025-05-19 21:52:49.395258 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-19 21:52:49.395668 | orchestrator | Monday 19 May 2025 21:52:49 +0000 (0:00:00.134) 0:00:41.350 ************
2025-05-19 21:52:49.519677 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:49.519764 | orchestrator |
2025-05-19 21:52:49.521203 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-19 21:52:49.522146 | orchestrator | Monday 19 May 2025 21:52:49 +0000 (0:00:00.128) 0:00:41.478 ************
2025-05-19 21:52:49.646760 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:49.646987 | orchestrator |
2025-05-19 21:52:49.648107 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-19 21:52:49.649287 | orchestrator | Monday 19 May 2025 21:52:49 +0000 (0:00:00.127) 0:00:41.605 ************
2025-05-19 21:52:49.968788 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:49.968897 | orchestrator |
2025-05-19 21:52:49.968914 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-19 21:52:49.971004 | orchestrator | Monday 19 May 2025 21:52:49 +0000 (0:00:00.318) 0:00:41.923 ************
2025-05-19 21:52:50.097250 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:50.098098 | orchestrator |
2025-05-19 21:52:50.099393 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-19 21:52:50.100073 | orchestrator | Monday 19 May 2025 21:52:50 +0000 (0:00:00.132) 0:00:42.055 ************
2025-05-19 21:52:50.234793 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:50.235684 | orchestrator |
2025-05-19 21:52:50.236724 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-19 21:52:50.237578 | orchestrator | Monday 19 May 2025 21:52:50 +0000 (0:00:00.137) 0:00:42.193 ************
2025-05-19 21:52:50.375444 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:50.376466 | orchestrator |
2025-05-19 21:52:50.377443 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-19 21:52:50.378374 | orchestrator | Monday 19 May 2025 21:52:50 +0000 (0:00:00.140) 0:00:42.334 ************
2025-05-19 21:52:50.508865 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:50.510118 | orchestrator |
2025-05-19 21:52:50.511375 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-19 21:52:50.512234 | orchestrator | Monday 19 May 2025 21:52:50 +0000 (0:00:00.133) 0:00:42.467 ************
2025-05-19 21:52:50.637854 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:50.638261 | orchestrator |
2025-05-19 21:52:50.639415 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-19 21:52:50.641497 | orchestrator | Monday 19 May 2025 21:52:50 +0000 (0:00:00.128) 0:00:42.596 ************
2025-05-19 21:52:50.759926 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:50.760396 | orchestrator |
2025-05-19 21:52:50.762070 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-19 21:52:50.762280 | orchestrator | Monday 19 May 2025 21:52:50 +0000 (0:00:00.122) 0:00:42.718 ************
2025-05-19 21:52:50.881469 | orchestrator | skipping: [testbed-node-4]
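The size-guard tasks above enforce two constraints before any DB LV is created: the space requested on a VG must not exceed what the VG has available, and each DB LV must be at least 30 GiB. All of them are skipped in this run because no `ceph_db_devices`/`ceph_wal_devices` are configured. A minimal sketch of the two guards, assuming byte counts as `vgs --units b` would report them (the function and its arguments are illustrative, not the playbook's code):

```python
GIB = 1024 ** 3

def check_db_lv_size(vg_free_bytes, db_lv_bytes, min_db_lv_bytes=30 * GIB):
    """Mirror 'Fail if size of DB LVs > available' and 'DB LV size < 30 GiB'."""
    if db_lv_bytes > vg_free_bytes:
        raise ValueError("size of DB LVs exceeds available VG space")
    if db_lv_bytes < min_db_lv_bytes:
        raise ValueError("DB LV size < 30 GiB")
    return True

# A 40 GiB DB LV on a VG with 100 GiB free passes both guards.
print(check_db_lv_size(vg_free_bytes=100 * GIB, db_lv_bytes=40 * GIB))
```

The 30 GiB floor matches Ceph's general guidance that undersized BlueStore DB partitions cause metadata spillover onto the slower block device.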
2025-05-19 21:52:50.883227 | orchestrator |
2025-05-19 21:52:50.884621 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-19 21:52:50.885333 | orchestrator | Monday 19 May 2025 21:52:50 +0000 (0:00:00.121) 0:00:42.839 ************
2025-05-19 21:52:51.019694 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:51.020091 | orchestrator |
2025-05-19 21:52:51.022701 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-19 21:52:51.023012 | orchestrator | Monday 19 May 2025 21:52:51 +0000 (0:00:00.136) 0:00:42.976 ************
2025-05-19 21:52:51.165546 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:51.166496 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:51.167184 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:51.168342 | orchestrator |
2025-05-19 21:52:51.169149 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-19 21:52:51.169970 | orchestrator | Monday 19 May 2025 21:52:51 +0000 (0:00:00.147) 0:00:43.123 ************
2025-05-19 21:52:51.311642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:51.312159 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:51.313631 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:51.314400 | orchestrator |
2025-05-19 21:52:51.315408 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-19 21:52:51.317040 | orchestrator | Monday 19 May 2025 21:52:51 +0000 (0:00:00.145) 0:00:43.269 ************
2025-05-19 21:52:51.460359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:51.460693 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:51.461638 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:51.462849 | orchestrator |
2025-05-19 21:52:51.463498 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-19 21:52:51.464358 | orchestrator | Monday 19 May 2025 21:52:51 +0000 (0:00:00.150) 0:00:43.419 ************
2025-05-19 21:52:51.785692 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:51.786570 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:51.787014 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:51.788891 | orchestrator |
2025-05-19 21:52:51.789896 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-19 21:52:51.790939 | orchestrator | Monday 19 May 2025 21:52:51 +0000 (0:00:00.324) 0:00:43.744 ************
2025-05-19 21:52:51.933564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:51.934399 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:51.938332 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:51.939161 | orchestrator |
2025-05-19 21:52:51.939996 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-19 21:52:51.940788 | orchestrator | Monday 19 May 2025 21:52:51 +0000 (0:00:00.147) 0:00:43.891 ************
2025-05-19 21:52:52.079030 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:52.079590 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:52.080227 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:52.081073 | orchestrator |
2025-05-19 21:52:52.081908 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-19 21:52:52.082532 | orchestrator | Monday 19 May 2025 21:52:52 +0000 (0:00:00.146) 0:00:44.038 ************
2025-05-19 21:52:52.226582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:52.227602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:52.228519 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:52.230456 | orchestrator |
2025-05-19 21:52:52.230812 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-19 21:52:52.231701 | orchestrator | Monday 19 May 2025 21:52:52 +0000 (0:00:00.146) 0:00:44.184 ************
2025-05-19 21:52:52.377821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:52.377919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:52.379146 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:52.381651 | orchestrator |
2025-05-19 21:52:52.382289 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-19 21:52:52.382802 | orchestrator | Monday 19 May 2025 21:52:52 +0000 (0:00:00.151) 0:00:44.335 ************
2025-05-19 21:52:52.882770 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:52:52.883395 | orchestrator |
2025-05-19 21:52:52.883938 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-19 21:52:52.885069 | orchestrator | Monday 19 May 2025 21:52:52 +0000 (0:00:00.504) 0:00:44.840 ************
2025-05-19 21:52:53.414678 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:52:53.415742 | orchestrator |
2025-05-19 21:52:53.416988 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-19 21:52:53.417381 | orchestrator | Monday 19 May 2025 21:52:53 +0000 (0:00:00.530) 0:00:45.371 ************
2025-05-19 21:52:53.559190 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:52:53.561972 | orchestrator |
2025-05-19 21:52:53.562704 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-19 21:52:53.563503 | orchestrator | Monday 19 May 2025 21:52:53 +0000 (0:00:00.145) 0:00:45.516 ************
2025-05-19 21:52:53.724459 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'vg_name': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:53.726144 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'vg_name': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:53.726257 | orchestrator |
2025-05-19 21:52:53.727377 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-19 21:52:53.728478 | orchestrator | Monday 19 May 2025 21:52:53 +0000 (0:00:00.165) 0:00:45.682 ************
2025-05-19 21:52:53.883568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:53.883745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:53.884770 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:53.885337 | orchestrator |
2025-05-19 21:52:53.886797 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-19 21:52:53.886822 | orchestrator | Monday 19 May 2025 21:52:53 +0000 (0:00:00.158) 0:00:45.841 ************
2025-05-19 21:52:54.039462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:54.039546 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:54.040375 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:54.040995 | orchestrator |
2025-05-19 21:52:54.041905 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-19 21:52:54.042398 | orchestrator | Monday 19 May 2025 21:52:54 +0000 (0:00:00.154) 0:00:45.996 ************
2025-05-19 21:52:54.187814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'})
2025-05-19 21:52:54.187941 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'})
2025-05-19 21:52:54.188350 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:52:54.189082 | orchestrator |
2025-05-19 21:52:54.189558 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-19 21:52:54.189917 | orchestrator | Monday 19 May 2025 21:52:54 +0000 (0:00:00.149) 0:00:46.146 ************
2025-05-19 21:52:54.696505 | orchestrator | ok: [testbed-node-4] => {
2025-05-19 21:52:54.696616 | orchestrator |     "lvm_report": {
2025-05-19 21:52:54.696689 | orchestrator |         "lv": [
2025-05-19 21:52:54.696809 | orchestrator |             {
2025-05-19 21:52:54.701915 | orchestrator |                 "lv_name": "osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52",
2025-05-19 21:52:54.701962 | orchestrator |                 "vg_name": "ceph-73ec3cc1-218e-51bb-a362-2e871742ea52"
2025-05-19 21:52:54.702007 | orchestrator |             },
2025-05-19 21:52:54.702103 | orchestrator |             {
2025-05-19 21:52:54.702166 | orchestrator |                 "lv_name": "osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a",
2025-05-19 21:52:54.703864 | orchestrator |                 "vg_name": "ceph-d2161015-9b2d-55ef-85cd-b20f941db83a"
2025-05-19 21:52:54.704163 | orchestrator |             }
2025-05-19 21:52:54.705393 | orchestrator |         ],
2025-05-19 21:52:54.707543 | orchestrator |         "pv": [
2025-05-19 21:52:54.708803 | orchestrator |             {
2025-05-19 21:52:54.709873 | orchestrator |                 "pv_name": "/dev/sdb",
2025-05-19 21:52:54.710084 | orchestrator |                 "vg_name": "ceph-d2161015-9b2d-55ef-85cd-b20f941db83a"
2025-05-19 21:52:54.711373 | orchestrator |             },
2025-05-19 21:52:54.711397 | orchestrator |             {
2025-05-19 21:52:54.711409 | orchestrator |                 "pv_name": "/dev/sdc",
2025-05-19 21:52:54.711625 | orchestrator |                 "vg_name":
"ceph-73ec3cc1-218e-51bb-a362-2e871742ea52"
2025-05-19 21:52:54.712542 | orchestrator |             }
2025-05-19 21:52:54.712987 | orchestrator |         ]
2025-05-19 21:52:54.713350 | orchestrator |     }
2025-05-19 21:52:54.713800 | orchestrator | }
2025-05-19 21:52:54.714338 | orchestrator |
2025-05-19 21:52:54.714759 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-19 21:52:54.715116 | orchestrator |
2025-05-19 21:52:54.715652 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-19 21:52:54.716265 | orchestrator | Monday 19 May 2025 21:52:54 +0000 (0:00:00.508) 0:00:46.654 ************
2025-05-19 21:52:54.925696 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-19 21:52:54.925880 | orchestrator |
2025-05-19 21:52:54.926458 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-19 21:52:54.927199 | orchestrator | Monday 19 May 2025 21:52:54 +0000 (0:00:00.229) 0:00:46.884 ************
2025-05-19 21:52:55.153225 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:52:55.153510 | orchestrator |
2025-05-19 21:52:55.154560 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:55.156378 | orchestrator | Monday 19 May 2025 21:52:55 +0000 (0:00:00.225) 0:00:47.110 ************
2025-05-19 21:52:55.536966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-19 21:52:55.537536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-19 21:52:55.539467 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-19 21:52:55.539743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-19 21:52:55.540956 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-19 21:52:55.541927 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-19 21:52:55.542951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-19 21:52:55.543798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-19 21:52:55.544447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-19 21:52:55.545117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-19 21:52:55.545627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-19 21:52:55.546356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-19 21:52:55.546934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-19 21:52:55.547535 | orchestrator |
2025-05-19 21:52:55.548082 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:55.548815 | orchestrator | Monday 19 May 2025 21:52:55 +0000 (0:00:00.383) 0:00:47.493 ************
2025-05-19 21:52:55.713605 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:52:55.714343 | orchestrator |
2025-05-19 21:52:55.715016 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:55.715433 | orchestrator | Monday 19 May 2025 21:52:55 +0000 (0:00:00.177) 0:00:47.671 ************
2025-05-19 21:52:55.902080 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:52:55.902291 | orchestrator |
2025-05-19 21:52:55.903107 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:55.903652 | orchestrator | Monday 19 May 2025 21:52:55 +0000 (0:00:00.188) 0:00:47.860 ************
2025-05-19 21:52:56.083679 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:52:56.083883 | orchestrator |
2025-05-19 21:52:56.084850 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:56.086420 | orchestrator | Monday 19 May 2025 21:52:56 +0000 (0:00:00.180) 0:00:48.041 ************
2025-05-19 21:52:56.273116 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:52:56.273206 | orchestrator |
2025-05-19 21:52:56.273769 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:56.274155 | orchestrator | Monday 19 May 2025 21:52:56 +0000 (0:00:00.188) 0:00:48.230 ************
2025-05-19 21:52:56.458833 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:52:56.459043 | orchestrator |
2025-05-19 21:52:56.459940 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:56.460230 | orchestrator | Monday 19 May 2025 21:52:56 +0000 (0:00:00.187) 0:00:48.417 ************
2025-05-19 21:52:57.023415 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:52:57.024145 | orchestrator |
2025-05-19 21:52:57.024819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:57.025990 | orchestrator | Monday 19 May 2025 21:52:57 +0000 (0:00:00.563) 0:00:48.980 ************
2025-05-19 21:52:57.225630 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:52:57.226284 | orchestrator |
2025-05-19 21:52:57.226885 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-19 21:52:57.227778 | orchestrator | Monday 19 May 2025 21:52:57 +0000 (0:00:00.203) 0:00:49.184 ************
2025-05-19 21:52:57.427897 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:52:57.428069 | orchestrator |
2025-05-19 21:52:57.429319
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:57.430167 | orchestrator | Monday 19 May 2025 21:52:57 +0000 (0:00:00.201) 0:00:49.385 ************ 2025-05-19 21:52:57.829604 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397) 2025-05-19 21:52:57.830101 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397) 2025-05-19 21:52:57.830678 | orchestrator | 2025-05-19 21:52:57.831514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:57.834370 | orchestrator | Monday 19 May 2025 21:52:57 +0000 (0:00:00.401) 0:00:49.787 ************ 2025-05-19 21:52:58.250895 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70) 2025-05-19 21:52:58.251207 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70) 2025-05-19 21:52:58.252826 | orchestrator | 2025-05-19 21:52:58.252854 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:58.252867 | orchestrator | Monday 19 May 2025 21:52:58 +0000 (0:00:00.418) 0:00:50.206 ************ 2025-05-19 21:52:58.660934 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6) 2025-05-19 21:52:58.661048 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6) 2025-05-19 21:52:58.665907 | orchestrator | 2025-05-19 21:52:58.666740 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:58.668350 | orchestrator | Monday 19 May 2025 21:52:58 +0000 (0:00:00.411) 0:00:50.617 ************ 2025-05-19 21:52:59.079695 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba) 2025-05-19 21:52:59.080801 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba) 2025-05-19 21:52:59.081892 | orchestrator | 2025-05-19 21:52:59.083797 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-19 21:52:59.084528 | orchestrator | Monday 19 May 2025 21:52:59 +0000 (0:00:00.418) 0:00:51.036 ************ 2025-05-19 21:52:59.417048 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-19 21:52:59.417629 | orchestrator | 2025-05-19 21:52:59.418265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:59.418564 | orchestrator | Monday 19 May 2025 21:52:59 +0000 (0:00:00.339) 0:00:51.375 ************ 2025-05-19 21:52:59.813161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-19 21:52:59.813814 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-19 21:52:59.815083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-19 21:52:59.816644 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-19 21:52:59.817429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-19 21:52:59.817453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-19 21:52:59.818257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-19 21:52:59.819242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-19 21:52:59.820409 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-19 21:52:59.821469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-19 21:52:59.822152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-19 21:52:59.822819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-19 21:52:59.823492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-19 21:52:59.824215 | orchestrator | 2025-05-19 21:52:59.824913 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:52:59.825347 | orchestrator | Monday 19 May 2025 21:52:59 +0000 (0:00:00.395) 0:00:51.770 ************ 2025-05-19 21:53:00.002589 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:00.002929 | orchestrator | 2025-05-19 21:53:00.003870 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:00.004485 | orchestrator | Monday 19 May 2025 21:52:59 +0000 (0:00:00.190) 0:00:51.961 ************ 2025-05-19 21:53:00.192813 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:00.193622 | orchestrator | 2025-05-19 21:53:00.194698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:00.195597 | orchestrator | Monday 19 May 2025 21:53:00 +0000 (0:00:00.188) 0:00:52.150 ************ 2025-05-19 21:53:00.790288 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:00.790800 | orchestrator | 2025-05-19 21:53:00.791946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:00.792603 | orchestrator | Monday 19 May 2025 21:53:00 +0000 (0:00:00.598) 0:00:52.748 ************ 2025-05-19 21:53:00.985734 | orchestrator | 
skipping: [testbed-node-5] 2025-05-19 21:53:00.985837 | orchestrator | 2025-05-19 21:53:00.986158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:00.987266 | orchestrator | Monday 19 May 2025 21:53:00 +0000 (0:00:00.194) 0:00:52.943 ************ 2025-05-19 21:53:01.170650 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:01.170761 | orchestrator | 2025-05-19 21:53:01.171372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:01.172193 | orchestrator | Monday 19 May 2025 21:53:01 +0000 (0:00:00.184) 0:00:53.127 ************ 2025-05-19 21:53:01.374791 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:01.375013 | orchestrator | 2025-05-19 21:53:01.376588 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:01.377384 | orchestrator | Monday 19 May 2025 21:53:01 +0000 (0:00:00.205) 0:00:53.332 ************ 2025-05-19 21:53:01.567364 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:01.568759 | orchestrator | 2025-05-19 21:53:01.569511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:01.570411 | orchestrator | Monday 19 May 2025 21:53:01 +0000 (0:00:00.193) 0:00:53.526 ************ 2025-05-19 21:53:01.764754 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:01.767661 | orchestrator | 2025-05-19 21:53:01.767694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:01.767706 | orchestrator | Monday 19 May 2025 21:53:01 +0000 (0:00:00.197) 0:00:53.723 ************ 2025-05-19 21:53:02.368285 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-19 21:53:02.369072 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-19 21:53:02.370057 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-19 
21:53:02.371679 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-19 21:53:02.371771 | orchestrator | 2025-05-19 21:53:02.372363 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:02.372640 | orchestrator | Monday 19 May 2025 21:53:02 +0000 (0:00:00.602) 0:00:54.325 ************ 2025-05-19 21:53:02.564915 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:02.565536 | orchestrator | 2025-05-19 21:53:02.566161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:02.567046 | orchestrator | Monday 19 May 2025 21:53:02 +0000 (0:00:00.196) 0:00:54.522 ************ 2025-05-19 21:53:02.767412 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:02.769073 | orchestrator | 2025-05-19 21:53:02.769972 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:02.771279 | orchestrator | Monday 19 May 2025 21:53:02 +0000 (0:00:00.203) 0:00:54.726 ************ 2025-05-19 21:53:02.961388 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:02.961491 | orchestrator | 2025-05-19 21:53:02.962490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-19 21:53:02.962718 | orchestrator | Monday 19 May 2025 21:53:02 +0000 (0:00:00.193) 0:00:54.919 ************ 2025-05-19 21:53:03.152128 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:03.152832 | orchestrator | 2025-05-19 21:53:03.153610 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-19 21:53:03.156426 | orchestrator | Monday 19 May 2025 21:53:03 +0000 (0:00:00.190) 0:00:55.109 ************ 2025-05-19 21:53:03.470107 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:03.471170 | orchestrator | 2025-05-19 21:53:03.472730 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-05-19 21:53:03.473807 | orchestrator | Monday 19 May 2025 21:53:03 +0000 (0:00:00.318) 0:00:55.428 ************ 2025-05-19 21:53:03.652901 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd6c00661-cf2a-5067-a507-d2ca4df6447b'}}) 2025-05-19 21:53:03.653890 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'}}) 2025-05-19 21:53:03.655460 | orchestrator | 2025-05-19 21:53:03.656520 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-19 21:53:03.657517 | orchestrator | Monday 19 May 2025 21:53:03 +0000 (0:00:00.182) 0:00:55.611 ************ 2025-05-19 21:53:05.453096 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'}) 2025-05-19 21:53:05.453212 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'}) 2025-05-19 21:53:05.453424 | orchestrator | 2025-05-19 21:53:05.454193 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-19 21:53:05.455047 | orchestrator | Monday 19 May 2025 21:53:05 +0000 (0:00:01.799) 0:00:57.410 ************ 2025-05-19 21:53:05.595883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:05.595985 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:05.596052 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:05.596577 | orchestrator | 2025-05-19 21:53:05.597506 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-05-19 21:53:05.597923 | orchestrator | Monday 19 May 2025 21:53:05 +0000 (0:00:00.142) 0:00:57.553 ************ 2025-05-19 21:53:06.900064 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'}) 2025-05-19 21:53:06.900176 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'}) 2025-05-19 21:53:06.900192 | orchestrator | 2025-05-19 21:53:06.900205 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-19 21:53:06.900284 | orchestrator | Monday 19 May 2025 21:53:06 +0000 (0:00:01.303) 0:00:58.856 ************ 2025-05-19 21:53:07.044658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:07.044989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:07.045906 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:07.046767 | orchestrator | 2025-05-19 21:53:07.047821 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-19 21:53:07.048509 | orchestrator | Monday 19 May 2025 21:53:07 +0000 (0:00:00.145) 0:00:59.002 ************ 2025-05-19 21:53:07.173817 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:07.174423 | orchestrator | 2025-05-19 21:53:07.175351 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-19 21:53:07.176233 | orchestrator | Monday 19 May 2025 21:53:07 +0000 (0:00:00.129) 0:00:59.132 ************ 2025-05-19 21:53:07.317846 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:07.319613 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:07.320771 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:07.322004 | orchestrator | 2025-05-19 21:53:07.322464 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-19 21:53:07.323388 | orchestrator | Monday 19 May 2025 21:53:07 +0000 (0:00:00.144) 0:00:59.276 ************ 2025-05-19 21:53:07.456272 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:07.457201 | orchestrator | 2025-05-19 21:53:07.458275 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-19 21:53:07.459636 | orchestrator | Monday 19 May 2025 21:53:07 +0000 (0:00:00.137) 0:00:59.413 ************ 2025-05-19 21:53:07.603669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:07.604503 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:07.606457 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:07.607839 | orchestrator | 2025-05-19 21:53:07.608719 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-19 21:53:07.609909 | orchestrator | Monday 19 May 2025 21:53:07 +0000 (0:00:00.146) 0:00:59.560 ************ 2025-05-19 21:53:07.740102 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:07.740816 | orchestrator | 2025-05-19 21:53:07.741720 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-05-19 21:53:07.743174 | orchestrator | Monday 19 May 2025 21:53:07 +0000 (0:00:00.136) 0:00:59.696 ************ 2025-05-19 21:53:07.889367 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:07.889615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:07.891590 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:07.892593 | orchestrator | 2025-05-19 21:53:07.893752 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-19 21:53:07.894520 | orchestrator | Monday 19 May 2025 21:53:07 +0000 (0:00:00.150) 0:00:59.847 ************ 2025-05-19 21:53:08.216446 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:53:08.216612 | orchestrator | 2025-05-19 21:53:08.217428 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-19 21:53:08.219603 | orchestrator | Monday 19 May 2025 21:53:08 +0000 (0:00:00.326) 0:01:00.174 ************ 2025-05-19 21:53:08.371638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:08.373431 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:08.373467 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:08.373796 | orchestrator | 2025-05-19 21:53:08.374701 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-19 21:53:08.375924 | orchestrator | Monday 19 May 2025 
21:53:08 +0000 (0:00:00.154) 0:01:00.328 ************ 2025-05-19 21:53:08.521747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:08.522421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:08.524843 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:08.524871 | orchestrator | 2025-05-19 21:53:08.524883 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-19 21:53:08.524896 | orchestrator | Monday 19 May 2025 21:53:08 +0000 (0:00:00.152) 0:01:00.480 ************ 2025-05-19 21:53:08.671254 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:08.672136 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:08.673097 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:08.673959 | orchestrator | 2025-05-19 21:53:08.675706 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-19 21:53:08.676220 | orchestrator | Monday 19 May 2025 21:53:08 +0000 (0:00:00.148) 0:01:00.629 ************ 2025-05-19 21:53:08.814069 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:08.814342 | orchestrator | 2025-05-19 21:53:08.814790 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-19 21:53:08.815443 | orchestrator | Monday 19 May 2025 21:53:08 +0000 (0:00:00.144) 0:01:00.773 ************ 2025-05-19 21:53:08.957633 | orchestrator | skipping: [testbed-node-5] 2025-05-19 
21:53:08.958344 | orchestrator | 2025-05-19 21:53:08.959077 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-19 21:53:08.959864 | orchestrator | Monday 19 May 2025 21:53:08 +0000 (0:00:00.142) 0:01:00.916 ************ 2025-05-19 21:53:09.086234 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:09.087647 | orchestrator | 2025-05-19 21:53:09.088426 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-19 21:53:09.089881 | orchestrator | Monday 19 May 2025 21:53:09 +0000 (0:00:00.128) 0:01:01.044 ************ 2025-05-19 21:53:09.221700 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 21:53:09.222831 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-19 21:53:09.223766 | orchestrator | } 2025-05-19 21:53:09.225211 | orchestrator | 2025-05-19 21:53:09.225648 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-19 21:53:09.226459 | orchestrator | Monday 19 May 2025 21:53:09 +0000 (0:00:00.135) 0:01:01.179 ************ 2025-05-19 21:53:09.348694 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 21:53:09.348930 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-19 21:53:09.349693 | orchestrator | } 2025-05-19 21:53:09.350638 | orchestrator | 2025-05-19 21:53:09.351916 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-19 21:53:09.353253 | orchestrator | Monday 19 May 2025 21:53:09 +0000 (0:00:00.126) 0:01:01.306 ************ 2025-05-19 21:53:09.485362 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 21:53:09.485818 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-19 21:53:09.486816 | orchestrator | } 2025-05-19 21:53:09.488756 | orchestrator | 2025-05-19 21:53:09.488951 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-19 21:53:09.489437 | 
orchestrator | Monday 19 May 2025 21:53:09 +0000 (0:00:00.136) 0:01:01.443 ************ 2025-05-19 21:53:10.012923 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:53:10.013151 | orchestrator | 2025-05-19 21:53:10.014205 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-19 21:53:10.014594 | orchestrator | Monday 19 May 2025 21:53:10 +0000 (0:00:00.526) 0:01:01.969 ************ 2025-05-19 21:53:10.510580 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:53:10.511461 | orchestrator | 2025-05-19 21:53:10.511808 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-19 21:53:10.512635 | orchestrator | Monday 19 May 2025 21:53:10 +0000 (0:00:00.499) 0:01:02.469 ************ 2025-05-19 21:53:11.187234 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:53:11.187402 | orchestrator | 2025-05-19 21:53:11.187420 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-19 21:53:11.187456 | orchestrator | Monday 19 May 2025 21:53:11 +0000 (0:00:00.671) 0:01:03.141 ************ 2025-05-19 21:53:11.327452 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:53:11.329179 | orchestrator | 2025-05-19 21:53:11.330658 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-19 21:53:11.330821 | orchestrator | Monday 19 May 2025 21:53:11 +0000 (0:00:00.143) 0:01:03.285 ************ 2025-05-19 21:53:11.439406 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:11.440874 | orchestrator | 2025-05-19 21:53:11.441466 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-19 21:53:11.442404 | orchestrator | Monday 19 May 2025 21:53:11 +0000 (0:00:00.112) 0:01:03.397 ************ 2025-05-19 21:53:11.534428 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:11.535771 | orchestrator | 2025-05-19 21:53:11.537157 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-19 21:53:11.538217 | orchestrator | Monday 19 May 2025 21:53:11 +0000 (0:00:00.094) 0:01:03.492 ************ 2025-05-19 21:53:11.660705 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 21:53:11.661554 | orchestrator |  "vgs_report": { 2025-05-19 21:53:11.662707 | orchestrator |  "vg": [] 2025-05-19 21:53:11.663146 | orchestrator |  } 2025-05-19 21:53:11.663657 | orchestrator | } 2025-05-19 21:53:11.664054 | orchestrator | 2025-05-19 21:53:11.664946 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-19 21:53:11.665045 | orchestrator | Monday 19 May 2025 21:53:11 +0000 (0:00:00.124) 0:01:03.617 ************ 2025-05-19 21:53:11.792470 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:11.792901 | orchestrator | 2025-05-19 21:53:11.794280 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-19 21:53:11.794592 | orchestrator | Monday 19 May 2025 21:53:11 +0000 (0:00:00.132) 0:01:03.750 ************ 2025-05-19 21:53:11.912002 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:11.912765 | orchestrator | 2025-05-19 21:53:11.914565 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-19 21:53:11.915275 | orchestrator | Monday 19 May 2025 21:53:11 +0000 (0:00:00.119) 0:01:03.869 ************ 2025-05-19 21:53:12.035978 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:12.036375 | orchestrator | 2025-05-19 21:53:12.037415 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-19 21:53:12.038315 | orchestrator | Monday 19 May 2025 21:53:12 +0000 (0:00:00.125) 0:01:03.994 ************ 2025-05-19 21:53:12.159957 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:12.160473 | orchestrator | 2025-05-19 21:53:12.161359 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-19 21:53:12.162672 | orchestrator | Monday 19 May 2025 21:53:12 +0000 (0:00:00.123) 0:01:04.118 ************ 2025-05-19 21:53:12.286348 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:12.286449 | orchestrator | 2025-05-19 21:53:12.286551 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-19 21:53:12.286733 | orchestrator | Monday 19 May 2025 21:53:12 +0000 (0:00:00.126) 0:01:04.244 ************ 2025-05-19 21:53:12.407879 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:12.408916 | orchestrator | 2025-05-19 21:53:12.409526 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-19 21:53:12.411268 | orchestrator | Monday 19 May 2025 21:53:12 +0000 (0:00:00.121) 0:01:04.366 ************ 2025-05-19 21:53:12.539410 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:12.540532 | orchestrator | 2025-05-19 21:53:12.541506 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-19 21:53:12.542470 | orchestrator | Monday 19 May 2025 21:53:12 +0000 (0:00:00.131) 0:01:04.497 ************ 2025-05-19 21:53:12.667240 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:12.667487 | orchestrator | 2025-05-19 21:53:12.668370 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-19 21:53:12.670909 | orchestrator | Monday 19 May 2025 21:53:12 +0000 (0:00:00.126) 0:01:04.624 ************ 2025-05-19 21:53:12.973104 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:12.973344 | orchestrator | 2025-05-19 21:53:12.974079 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-19 21:53:12.974780 | orchestrator | Monday 19 May 2025 21:53:12 +0000 (0:00:00.307) 0:01:04.932 ************ 
2025-05-19 21:53:13.110124 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:13.111129 | orchestrator | 2025-05-19 21:53:13.112459 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-19 21:53:13.113485 | orchestrator | Monday 19 May 2025 21:53:13 +0000 (0:00:00.136) 0:01:05.068 ************ 2025-05-19 21:53:13.261772 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:13.262166 | orchestrator | 2025-05-19 21:53:13.262831 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-19 21:53:13.263462 | orchestrator | Monday 19 May 2025 21:53:13 +0000 (0:00:00.152) 0:01:05.220 ************ 2025-05-19 21:53:13.390892 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:13.391103 | orchestrator | 2025-05-19 21:53:13.391496 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-19 21:53:13.391914 | orchestrator | Monday 19 May 2025 21:53:13 +0000 (0:00:00.129) 0:01:05.350 ************ 2025-05-19 21:53:13.515695 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:13.515786 | orchestrator | 2025-05-19 21:53:13.515818 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-19 21:53:13.515831 | orchestrator | Monday 19 May 2025 21:53:13 +0000 (0:00:00.118) 0:01:05.469 ************ 2025-05-19 21:53:13.628829 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:13.630231 | orchestrator | 2025-05-19 21:53:13.630279 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-19 21:53:13.630635 | orchestrator | Monday 19 May 2025 21:53:13 +0000 (0:00:00.118) 0:01:05.587 ************ 2025-05-19 21:53:13.779494 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 
21:53:13.781254 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:13.781591 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:13.783082 | orchestrator | 2025-05-19 21:53:13.785279 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-19 21:53:13.785328 | orchestrator | Monday 19 May 2025 21:53:13 +0000 (0:00:00.149) 0:01:05.737 ************ 2025-05-19 21:53:13.915684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:13.917430 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:13.918938 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:13.920195 | orchestrator | 2025-05-19 21:53:13.921099 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-19 21:53:13.922081 | orchestrator | Monday 19 May 2025 21:53:13 +0000 (0:00:00.135) 0:01:05.873 ************ 2025-05-19 21:53:14.079981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:14.080893 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:14.083260 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:14.085458 | orchestrator | 2025-05-19 21:53:14.086932 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-19 21:53:14.088238 | orchestrator | Monday 19 May 2025 21:53:14 
+0000 (0:00:00.164) 0:01:06.037 ************ 2025-05-19 21:53:14.233506 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:14.234098 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:14.235381 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:14.236316 | orchestrator | 2025-05-19 21:53:14.237440 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-19 21:53:14.238266 | orchestrator | Monday 19 May 2025 21:53:14 +0000 (0:00:00.153) 0:01:06.191 ************ 2025-05-19 21:53:14.390107 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:14.392564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:14.393819 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:14.395414 | orchestrator | 2025-05-19 21:53:14.396994 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-19 21:53:14.397969 | orchestrator | Monday 19 May 2025 21:53:14 +0000 (0:00:00.156) 0:01:06.347 ************ 2025-05-19 21:53:14.546829 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:14.549646 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:14.550700 | orchestrator | skipping: 
[testbed-node-5] 2025-05-19 21:53:14.552331 | orchestrator | 2025-05-19 21:53:14.553455 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-19 21:53:14.554789 | orchestrator | Monday 19 May 2025 21:53:14 +0000 (0:00:00.156) 0:01:06.503 ************ 2025-05-19 21:53:14.921162 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:14.923017 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:14.926677 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:14.927181 | orchestrator | 2025-05-19 21:53:14.929433 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-19 21:53:14.931167 | orchestrator | Monday 19 May 2025 21:53:14 +0000 (0:00:00.374) 0:01:06.877 ************ 2025-05-19 21:53:15.075712 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:15.078591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:15.080262 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:15.081278 | orchestrator | 2025-05-19 21:53:15.082312 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-19 21:53:15.083124 | orchestrator | Monday 19 May 2025 21:53:15 +0000 (0:00:00.154) 0:01:07.032 ************ 2025-05-19 21:53:15.581713 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:53:15.583604 | orchestrator | 2025-05-19 21:53:15.584167 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-05-19 21:53:15.584405 | orchestrator | Monday 19 May 2025 21:53:15 +0000 (0:00:00.506) 0:01:07.539 ************ 2025-05-19 21:53:16.093202 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:53:16.096503 | orchestrator | 2025-05-19 21:53:16.099610 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-19 21:53:16.100903 | orchestrator | Monday 19 May 2025 21:53:16 +0000 (0:00:00.509) 0:01:08.049 ************ 2025-05-19 21:53:16.231868 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:53:16.233115 | orchestrator | 2025-05-19 21:53:16.237939 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-19 21:53:16.238203 | orchestrator | Monday 19 May 2025 21:53:16 +0000 (0:00:00.141) 0:01:08.190 ************ 2025-05-19 21:53:16.402093 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'vg_name': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'}) 2025-05-19 21:53:16.403391 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'vg_name': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'}) 2025-05-19 21:53:16.404930 | orchestrator | 2025-05-19 21:53:16.408663 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-19 21:53:16.409434 | orchestrator | Monday 19 May 2025 21:53:16 +0000 (0:00:00.168) 0:01:08.359 ************ 2025-05-19 21:53:16.554825 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:16.560630 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:16.560688 | orchestrator | skipping: 
[testbed-node-5] 2025-05-19 21:53:16.560703 | orchestrator | 2025-05-19 21:53:16.560765 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-19 21:53:16.561599 | orchestrator | Monday 19 May 2025 21:53:16 +0000 (0:00:00.152) 0:01:08.512 ************ 2025-05-19 21:53:16.710583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:16.713690 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:16.713918 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:16.715370 | orchestrator | 2025-05-19 21:53:16.716753 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-19 21:53:16.717752 | orchestrator | Monday 19 May 2025 21:53:16 +0000 (0:00:00.153) 0:01:08.666 ************ 2025-05-19 21:53:16.860391 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'})  2025-05-19 21:53:16.865699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'})  2025-05-19 21:53:16.865755 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:16.865770 | orchestrator | 2025-05-19 21:53:16.867356 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-19 21:53:16.868578 | orchestrator | Monday 19 May 2025 21:53:16 +0000 (0:00:00.151) 0:01:08.817 ************ 2025-05-19 21:53:17.004040 | orchestrator | ok: [testbed-node-5] => { 2025-05-19 21:53:17.005744 | orchestrator |  "lvm_report": { 2025-05-19 21:53:17.006874 | orchestrator |  "lv": [ 2025-05-19 
21:53:17.008600 | orchestrator |  { 2025-05-19 21:53:17.009500 | orchestrator |  "lv_name": "osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8", 2025-05-19 21:53:17.011866 | orchestrator |  "vg_name": "ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8" 2025-05-19 21:53:17.013087 | orchestrator |  }, 2025-05-19 21:53:17.014504 | orchestrator |  { 2025-05-19 21:53:17.015547 | orchestrator |  "lv_name": "osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b", 2025-05-19 21:53:17.016559 | orchestrator |  "vg_name": "ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b" 2025-05-19 21:53:17.017903 | orchestrator |  } 2025-05-19 21:53:17.018915 | orchestrator |  ], 2025-05-19 21:53:17.019825 | orchestrator |  "pv": [ 2025-05-19 21:53:17.020910 | orchestrator |  { 2025-05-19 21:53:17.021742 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-19 21:53:17.022797 | orchestrator |  "vg_name": "ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b" 2025-05-19 21:53:17.023002 | orchestrator |  }, 2025-05-19 21:53:17.023911 | orchestrator |  { 2025-05-19 21:53:17.024623 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-19 21:53:17.025765 | orchestrator |  "vg_name": "ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8" 2025-05-19 21:53:17.026513 | orchestrator |  } 2025-05-19 21:53:17.026919 | orchestrator |  ] 2025-05-19 21:53:17.027750 | orchestrator |  } 2025-05-19 21:53:17.028242 | orchestrator | } 2025-05-19 21:53:17.029054 | orchestrator | 2025-05-19 21:53:17.029469 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:53:17.030573 | orchestrator | 2025-05-19 21:53:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 21:53:17.030598 | orchestrator | 2025-05-19 21:53:17 | INFO  | Please wait and do not abort execution. 
2025-05-19 21:53:17.030931 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-19 21:53:17.031522 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-19 21:53:17.032241 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-19 21:53:17.032660 | orchestrator | 2025-05-19 21:53:17.033359 | orchestrator | 2025-05-19 21:53:17.033882 | orchestrator | 2025-05-19 21:53:17.034567 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:53:17.034803 | orchestrator | Monday 19 May 2025 21:53:16 +0000 (0:00:00.145) 0:01:08.962 ************ 2025-05-19 21:53:17.035458 | orchestrator | =============================================================================== 2025-05-19 21:53:17.035652 | orchestrator | Create block VGs -------------------------------------------------------- 5.63s 2025-05-19 21:53:17.036198 | orchestrator | Create block LVs -------------------------------------------------------- 4.01s 2025-05-19 21:53:17.036585 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.88s 2025-05-19 21:53:17.037012 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.72s 2025-05-19 21:53:17.037508 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2025-05-19 21:53:17.037785 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.56s 2025-05-19 21:53:17.038310 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.55s 2025-05-19 21:53:17.038735 | orchestrator | Add known partitions to the list of available block devices ------------- 1.38s 2025-05-19 21:53:17.039238 | orchestrator | Add known links to the list of available block devices 
------------------ 1.10s 2025-05-19 21:53:17.039646 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s 2025-05-19 21:53:17.040075 | orchestrator | Print LVM report data --------------------------------------------------- 0.95s 2025-05-19 21:53:17.040326 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2025-05-19 21:53:17.040772 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2025-05-19 21:53:17.041364 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.67s 2025-05-19 21:53:17.041720 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2025-05-19 21:53:17.042092 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.67s 2025-05-19 21:53:17.042544 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.66s 2025-05-19 21:53:17.042939 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.65s 2025-05-19 21:53:17.043245 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-05-19 21:53:17.043766 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.63s 2025-05-19 21:53:19.366603 | orchestrator | 2025-05-19 21:53:19 | INFO  | Task c3455867-a4e2-414f-b4f9-31c32a12faa0 (facts) was prepared for execution. 2025-05-19 21:53:19.366717 | orchestrator | 2025-05-19 21:53:19 | INFO  | It takes a moment until task c3455867-a4e2-414f-b4f9-31c32a12faa0 (facts) has been started and output is visible here. 
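
The play above gathers `lvs` and `pvs` JSON reports ("Get list of Ceph LVs/PVs with associated VGs"), combines them in the "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" task, and then derives VG/LV names to validate `lvm_volumes`. A minimal sketch of that combine-and-list step, assuming LVM's `--reportformat json` output shape; the variable names and shortened device IDs are hypothetical, not the playbook's actual implementation:

```python
import json

# Hypothetical captured command output, shaped like
# `lvs --reportformat json -o lv_name,vg_name` and
# `pvs --reportformat json -o pv_name,vg_name`.
_lvs_cmd_output = '{"report": [{"lv": [{"lv_name": "osd-block-d6c0", "vg_name": "ceph-d6c0"}]}]}'
_pvs_cmd_output = '{"report": [{"pv": [{"pv_name": "/dev/sdb", "vg_name": "ceph-d6c0"}]}]}'

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lv and pv sections of both reports into one lvm_report dict."""
    return {
        "lv": json.loads(lvs_json)["report"][0]["lv"],
        "pv": json.loads(pvs_json)["report"][0]["pv"],
    }

def vg_lv_names(lvm_report: dict) -> list[str]:
    """Build 'vg/lv' identifiers, as used to check that each lvm_volumes entry exists."""
    return [f"{lv['vg_name']}/{lv['lv_name']}" for lv in lvm_report["lv"]]

lvm_report = combine_reports(_lvs_cmd_output, _pvs_cmd_output)
print(vg_lv_names(lvm_report))  # → ['ceph-d6c0/osd-block-d6c0']
```

With such a list in hand, a "Fail if block LV defined in lvm_volumes is missing" check reduces to a membership test per configured volume, which is why those tasks skip when every LV is present.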
2025-05-19 21:53:23.148712 | orchestrator | 2025-05-19 21:53:23.152555 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-19 21:53:23.153949 | orchestrator | 2025-05-19 21:53:23.154698 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-19 21:53:23.157152 | orchestrator | Monday 19 May 2025 21:53:23 +0000 (0:00:00.194) 0:00:00.194 ************ 2025-05-19 21:53:24.012275 | orchestrator | ok: [testbed-manager] 2025-05-19 21:53:24.012873 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:53:24.016512 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:53:24.016538 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:53:24.016549 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:53:24.016560 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:53:24.016571 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:53:24.017399 | orchestrator | 2025-05-19 21:53:24.018795 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-19 21:53:24.019866 | orchestrator | Monday 19 May 2025 21:53:24 +0000 (0:00:00.861) 0:00:01.056 ************ 2025-05-19 21:53:24.158186 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:53:24.227141 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:53:24.297324 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:53:24.365455 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:53:24.433843 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:53:25.054633 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:53:25.054783 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:25.054841 | orchestrator | 2025-05-19 21:53:25.055070 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-19 21:53:25.055392 | orchestrator | 2025-05-19 21:53:25.055788 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-05-19 21:53:25.056077 | orchestrator | Monday 19 May 2025 21:53:25 +0000 (0:00:01.047) 0:00:02.103 ************ 2025-05-19 21:53:29.863768 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:53:29.864396 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:53:29.865676 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:53:29.869577 | orchestrator | ok: [testbed-manager] 2025-05-19 21:53:29.869598 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:53:29.869610 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:53:29.869621 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:53:29.869632 | orchestrator | 2025-05-19 21:53:29.869686 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-19 21:53:29.870870 | orchestrator | 2025-05-19 21:53:29.871121 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-19 21:53:29.871826 | orchestrator | Monday 19 May 2025 21:53:29 +0000 (0:00:04.807) 0:00:06.910 ************ 2025-05-19 21:53:30.018706 | orchestrator | skipping: [testbed-manager] 2025-05-19 21:53:30.090774 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:53:30.165362 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:53:30.239765 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:53:30.313493 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:53:30.353606 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:53:30.353941 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:53:30.354898 | orchestrator | 2025-05-19 21:53:30.355683 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:53:30.355970 | orchestrator | 2025-05-19 21:53:30 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-19 21:53:30.356176 | orchestrator | 2025-05-19 21:53:30 | INFO  | Please wait and do not abort execution. 2025-05-19 21:53:30.357412 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:53:30.358014 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:53:30.358532 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:53:30.359305 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:53:30.360935 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:53:30.361329 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:53:30.362412 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 21:53:30.363491 | orchestrator | 2025-05-19 21:53:30.363511 | orchestrator | 2025-05-19 21:53:30.363523 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:53:30.363834 | orchestrator | Monday 19 May 2025 21:53:30 +0000 (0:00:00.490) 0:00:07.401 ************ 2025-05-19 21:53:30.365201 | orchestrator | =============================================================================== 2025-05-19 21:53:30.365714 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.81s 2025-05-19 21:53:30.366550 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s 2025-05-19 21:53:30.367496 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.86s 2025-05-19 21:53:30.367781 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2025-05-19 
21:53:30.982194 | orchestrator | 2025-05-19 21:53:30.984301 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon May 19 21:53:30 UTC 2025 2025-05-19 21:53:30.984328 | orchestrator | 2025-05-19 21:53:32.628485 | orchestrator | 2025-05-19 21:53:32 | INFO  | Collection nutshell is prepared for execution 2025-05-19 21:53:32.628594 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [0] - dotfiles 2025-05-19 21:53:32.635354 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [0] - homer 2025-05-19 21:53:32.635390 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [0] - netdata 2025-05-19 21:53:32.635451 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [0] - openstackclient 2025-05-19 21:53:32.635656 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [0] - phpmyadmin 2025-05-19 21:53:32.635718 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [0] - common 2025-05-19 21:53:32.637723 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [1] -- loadbalancer 2025-05-19 21:53:32.637746 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [2] --- opensearch 2025-05-19 21:53:32.637803 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [2] --- mariadb-ng 2025-05-19 21:53:32.637817 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [3] ---- horizon 2025-05-19 21:53:32.637907 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [3] ---- keystone 2025-05-19 21:53:32.638151 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [4] ----- neutron 2025-05-19 21:53:32.638226 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [5] ------ wait-for-nova 2025-05-19 21:53:32.638367 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [5] ------ octavia 2025-05-19 21:53:32.639065 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [4] ----- barbican 2025-05-19 21:53:32.639328 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [4] ----- designate 2025-05-19 21:53:32.639427 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [4] ----- ironic 2025-05-19 21:53:32.639522 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [4] ----- placement 
2025-05-19 21:53:32.639650 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [4] ----- magnum 2025-05-19 21:53:32.640103 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [1] -- openvswitch 2025-05-19 21:53:32.640187 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [2] --- ovn 2025-05-19 21:53:32.640503 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [1] -- memcached 2025-05-19 21:53:32.640748 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [1] -- redis 2025-05-19 21:53:32.640899 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [1] -- rabbitmq-ng 2025-05-19 21:53:32.640976 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [0] - kubernetes 2025-05-19 21:53:32.643968 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [1] -- kubeconfig 2025-05-19 21:53:32.644062 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [1] -- copy-kubeconfig 2025-05-19 21:53:32.644078 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [0] - ceph 2025-05-19 21:53:32.644625 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [1] -- ceph-pools 2025-05-19 21:53:32.644654 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [2] --- copy-ceph-keys 2025-05-19 21:53:32.644942 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [3] ---- cephclient 2025-05-19 21:53:32.644975 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-19 21:53:32.645077 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [4] ----- wait-for-keystone 2025-05-19 21:53:32.645312 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-19 21:53:32.645340 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [5] ------ glance 2025-05-19 21:53:32.645432 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [5] ------ cinder 2025-05-19 21:53:32.645532 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [5] ------ nova 2025-05-19 21:53:32.645860 | orchestrator | 2025-05-19 21:53:32 | INFO  | A [4] ----- prometheus 2025-05-19 21:53:32.646112 | orchestrator | 2025-05-19 21:53:32 | INFO  | D [5] ------ 
grafana 2025-05-19 21:53:32.825129 | orchestrator | 2025-05-19 21:53:32 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-19 21:53:32.827424 | orchestrator | 2025-05-19 21:53:32 | INFO  | Tasks are running in the background 2025-05-19 21:53:35.435937 | orchestrator | 2025-05-19 21:53:35 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-19 21:53:37.556836 | orchestrator | 2025-05-19 21:53:37 | INFO  | Task eb174827-02e1-4a7d-a445-a2336ffd87e3 is in state STARTED 2025-05-19 21:53:37.557000 | orchestrator | 2025-05-19 21:53:37 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:53:37.557675 | orchestrator | 2025-05-19 21:53:37 | INFO  | Task af0252aa-6a74-4c32-be52-4c55cd214a2e is in state STARTED 2025-05-19 21:53:37.557956 | orchestrator | 2025-05-19 21:53:37 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:53:37.558596 | orchestrator | 2025-05-19 21:53:37 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:53:37.563219 | orchestrator | 2025-05-19 21:53:37 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:53:37.563255 | orchestrator | 2025-05-19 21:53:37 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:53:37.566367 | orchestrator | 2025-05-19 21:53:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:53:40.613764 | orchestrator | 2025-05-19 21:53:40 | INFO  | Task eb174827-02e1-4a7d-a445-a2336ffd87e3 is in state STARTED 2025-05-19 21:53:40.613870 | orchestrator | 2025-05-19 21:53:40 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:53:40.614079 | orchestrator | 2025-05-19 21:53:40 | INFO  | Task af0252aa-6a74-4c32-be52-4c55cd214a2e is in state STARTED 2025-05-19 21:53:40.618630 | orchestrator | 2025-05-19 21:53:40 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 
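
The indented `A`/`D` listing above reads like a dependency tree walked depth-first, with the bracketed number giving the nesting depth. A minimal, purely illustrative sketch of printing such a plan; the `Node` structure and the reading of `A` as enabled and `D` as disabled are assumptions, not the OSISM implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One service in the plan; children depend on their parent."""
    name: str
    enabled: bool = True          # assumption: 'A' = applied, 'D' = disabled/deferred
    children: list["Node"] = field(default_factory=list)

def print_plan(node: Node, depth: int = 0) -> list[str]:
    """Render the tree in the log's 'A [depth] --- name' style, depth-first."""
    flag = "A" if node.enabled else "D"
    lines = [f"{flag} [{depth}] {'-' * (depth + 1)} {node.name}"]
    for child in node.children:
        lines.extend(print_plan(child, depth + 1))
    return lines

plan = Node("common", children=[Node("loadbalancer", children=[Node("opensearch", enabled=False)])])
for line in print_plan(plan):
    print(line)
```

Running this prints `A [0] - common`, `A [1] -- loadbalancer`, `D [2] --- opensearch`, matching the log's layout for that branch of the nutshell collection.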
2025-05-19 21:53:59.032011 | orchestrator | 2025-05-19 21:53:59 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:53:59.032028 | orchestrator | 2025-05-19 21:53:59 | INFO  | Task af0252aa-6a74-4c32-be52-4c55cd214a2e is in state STARTED 2025-05-19 21:53:59.032488 | orchestrator | 2025-05-19 21:53:59 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:53:59.033387 | orchestrator | 2025-05-19 21:53:59 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:53:59.033412 | orchestrator | 2025-05-19 21:53:59 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:53:59.039146 | orchestrator | 2025-05-19 21:53:59 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:53:59.039172 | orchestrator | 2025-05-19 21:53:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:02.102976 | orchestrator | 2025-05-19 21:54:02.103073 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-19 21:54:02.103089 | orchestrator | 2025-05-19 21:54:02.103101 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-05-19 21:54:02.103113 | orchestrator | Monday 19 May 2025 21:53:45 +0000 (0:00:00.683) 0:00:00.683 ************ 2025-05-19 21:54:02.103123 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:02.103135 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:54:02.103146 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:54:02.103157 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:54:02.103168 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:54:02.103179 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:54:02.103190 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:54:02.103200 | orchestrator | 2025-05-19 21:54:02.103211 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-05-19 21:54:02.103222 | orchestrator | Monday 19 May 2025 21:53:50 +0000 (0:00:04.637) 0:00:05.320 ************ 2025-05-19 21:54:02.103233 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-19 21:54:02.103244 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-19 21:54:02.103255 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-19 21:54:02.103266 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-19 21:54:02.103345 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-19 21:54:02.103363 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-19 21:54:02.103374 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-19 21:54:02.103385 | orchestrator | 2025-05-19 21:54:02.103396 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-05-19 21:54:02.103408 | orchestrator | Monday 19 May 2025 21:53:52 +0000 (0:00:01.989) 0:00:07.310 ************ 2025-05-19 21:54:02.103423 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 21:53:51.111950', 'end': '2025-05-19 21:53:51.115297', 'delta': '0:00:00.003347', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 21:54:02.103446 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 21:53:51.200326', 'end': '2025-05-19 21:53:51.210586', 'delta': '0:00:00.010260', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 21:54:02.103459 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 21:53:51.131139', 'end': '2025-05-19 21:53:51.140566', 'delta': '0:00:00.009427', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 21:54:02.103518 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 21:53:51.199429', 'end': '2025-05-19 21:53:51.206288', 'delta': '0:00:00.006859', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 21:54:02.103533 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 21:53:51.253968', 'end': '2025-05-19 21:53:51.259019', 'delta': '0:00:00.005051', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 21:54:02.103547 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 21:53:51.631477', 'end': '2025-05-19 21:53:51.641431', 'delta': '0:00:00.009954', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 21:54:02.103564 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-19 21:53:52.045607', 'end': '2025-05-19 21:53:52.054417', 'delta': '0:00:00.008810', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-19 21:54:02.103578 | orchestrator | 2025-05-19 21:54:02.103591 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-05-19 21:54:02.103604 | orchestrator | Monday 19 May 2025 21:53:54 +0000 (0:00:01.819) 0:00:09.129 ************ 2025-05-19 21:54:02.103627 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-19 21:54:02.103641 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-19 21:54:02.103654 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-19 21:54:02.103666 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-19 21:54:02.103678 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-19 21:54:02.103690 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-19 21:54:02.103703 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-19 21:54:02.103715 | orchestrator | 2025-05-19 21:54:02.103727 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-05-19 21:54:02.103740 | orchestrator | Monday 19 May 2025 21:53:56 +0000 (0:00:02.386) 0:00:11.519 ************ 2025-05-19 21:54:02.103752 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-19 21:54:02.103765 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-19 21:54:02.103778 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-19 21:54:02.103790 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-19 21:54:02.103802 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-19 21:54:02.103815 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-19 21:54:02.103827 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-19 21:54:02.103839 | orchestrator | 2025-05-19 21:54:02.103852 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:54:02.103872 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:02.103885 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:02.103896 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:02.103907 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:02.103918 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:02.103929 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:02.103940 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:02.103951 | orchestrator | 2025-05-19 21:54:02.103962 | orchestrator | 2025-05-19 21:54:02.103973 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-05-19 21:54:02.103984 | orchestrator | Monday 19 May 2025 21:53:59 +0000 (0:00:03.413) 0:00:14.933 ************ 2025-05-19 21:54:02.103995 | orchestrator | =============================================================================== 2025-05-19 21:54:02.104006 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.64s 2025-05-19 21:54:02.104017 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.41s 2025-05-19 21:54:02.104028 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.39s 2025-05-19 21:54:02.104038 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.99s 2025-05-19 21:54:02.104049 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.82s 2025-05-19 21:54:02.104092 | orchestrator | 2025-05-19 21:54:02 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:02.104105 | orchestrator | 2025-05-19 21:54:02 | INFO  | Task eb174827-02e1-4a7d-a445-a2336ffd87e3 is in state SUCCESS 2025-05-19 21:54:02.104123 | orchestrator | 2025-05-19 21:54:02 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:02.104135 | orchestrator | 2025-05-19 21:54:02 | INFO  | Task af0252aa-6a74-4c32-be52-4c55cd214a2e is in state STARTED 2025-05-19 21:54:02.104146 | orchestrator | 2025-05-19 21:54:02 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:02.104156 | orchestrator | 2025-05-19 21:54:02 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:02.104168 | orchestrator | 2025-05-19 21:54:02 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:54:02.106205 | orchestrator | 2025-05-19 21:54:02 | INFO  | Task 
12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:02.106230 | orchestrator | 2025-05-19 21:54:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:05.157733 | orchestrator | 2025-05-19 21:54:05 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:05.157848 | orchestrator | 2025-05-19 21:54:05 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:05.157873 | orchestrator | 2025-05-19 21:54:05 | INFO  | Task af0252aa-6a74-4c32-be52-4c55cd214a2e is in state STARTED 2025-05-19 21:54:05.157895 | orchestrator | 2025-05-19 21:54:05 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:05.158675 | orchestrator | 2025-05-19 21:54:05 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:05.160077 | orchestrator | 2025-05-19 21:54:05 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:54:05.161732 | orchestrator | 2025-05-19 21:54:05 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:05.161778 | orchestrator | 2025-05-19 21:54:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:08.213471 | orchestrator | 2025-05-19 21:54:08 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:08.213582 | orchestrator | 2025-05-19 21:54:08 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:08.214010 | orchestrator | 2025-05-19 21:54:08 | INFO  | Task af0252aa-6a74-4c32-be52-4c55cd214a2e is in state STARTED 2025-05-19 21:54:08.216066 | orchestrator | 2025-05-19 21:54:08 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:08.218835 | orchestrator | 2025-05-19 21:54:08 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:08.219204 | orchestrator | 2025-05-19 21:54:08 | INFO  | Task 
3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:54:08.219912 | orchestrator | 2025-05-19 21:54:08 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:08.219940 | orchestrator | 2025-05-19 21:54:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:11.295392 | orchestrator | 2025-05-19 21:54:11 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:11.295491 | orchestrator | 2025-05-19 21:54:11 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:11.295573 | orchestrator | 2025-05-19 21:54:11 | INFO  | Task af0252aa-6a74-4c32-be52-4c55cd214a2e is in state STARTED 2025-05-19 21:54:11.296071 | orchestrator | 2025-05-19 21:54:11 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:11.296844 | orchestrator | 2025-05-19 21:54:11 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:11.297128 | orchestrator | 2025-05-19 21:54:11 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:54:11.297951 | orchestrator | 2025-05-19 21:54:11 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:11.297972 | orchestrator | 2025-05-19 21:54:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:14.379466 | orchestrator | 2025-05-19 21:54:14 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:14.381768 | orchestrator | 2025-05-19 21:54:14 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:14.382789 | orchestrator | 2025-05-19 21:54:14 | INFO  | Task af0252aa-6a74-4c32-be52-4c55cd214a2e is in state STARTED 2025-05-19 21:54:14.385010 | orchestrator | 2025-05-19 21:54:14 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:14.385894 | orchestrator | 2025-05-19 21:54:14 | INFO  | Task 
905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:14.388785 | orchestrator | 2025-05-19 21:54:14 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:54:14.388949 | orchestrator | 2025-05-19 21:54:14 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:14.389360 | orchestrator | 2025-05-19 21:54:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:17.465473 | orchestrator | 2025-05-19 21:54:17 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:17.465589 | orchestrator | 2025-05-19 21:54:17 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:17.465675 | orchestrator | 2025-05-19 21:54:17 | INFO  | Task af0252aa-6a74-4c32-be52-4c55cd214a2e is in state SUCCESS 2025-05-19 21:54:17.469155 | orchestrator | 2025-05-19 21:54:17 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:17.469561 | orchestrator | 2025-05-19 21:54:17 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:17.475214 | orchestrator | 2025-05-19 21:54:17 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:54:17.475260 | orchestrator | 2025-05-19 21:54:17 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:17.475308 | orchestrator | 2025-05-19 21:54:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:20.524137 | orchestrator | 2025-05-19 21:54:20 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:20.524735 | orchestrator | 2025-05-19 21:54:20 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:20.529596 | orchestrator | 2025-05-19 21:54:20 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:20.529635 | orchestrator | 2025-05-19 21:54:20 | INFO  | Task 
905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:20.534533 | orchestrator | 2025-05-19 21:54:20 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:54:20.534663 | orchestrator | 2025-05-19 21:54:20 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:20.536019 | orchestrator | 2025-05-19 21:54:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:23.622952 | orchestrator | 2025-05-19 21:54:23 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:23.623966 | orchestrator | 2025-05-19 21:54:23 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:23.624011 | orchestrator | 2025-05-19 21:54:23 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:23.625183 | orchestrator | 2025-05-19 21:54:23 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:23.640117 | orchestrator | 2025-05-19 21:54:23 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:54:23.647577 | orchestrator | 2025-05-19 21:54:23 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:23.647610 | orchestrator | 2025-05-19 21:54:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:26.682685 | orchestrator | 2025-05-19 21:54:26 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:26.682877 | orchestrator | 2025-05-19 21:54:26 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:26.684065 | orchestrator | 2025-05-19 21:54:26 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:26.684090 | orchestrator | 2025-05-19 21:54:26 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:26.686241 | orchestrator | 2025-05-19 21:54:26 | INFO  | Task 
3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:54:26.687279 | orchestrator | 2025-05-19 21:54:26 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:26.687302 | orchestrator | 2025-05-19 21:54:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:29.742627 | orchestrator | 2025-05-19 21:54:29 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:29.745239 | orchestrator | 2025-05-19 21:54:29 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:29.750536 | orchestrator | 2025-05-19 21:54:29 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:29.754231 | orchestrator | 2025-05-19 21:54:29 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:29.755452 | orchestrator | 2025-05-19 21:54:29 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state STARTED 2025-05-19 21:54:29.757675 | orchestrator | 2025-05-19 21:54:29 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:29.758990 | orchestrator | 2025-05-19 21:54:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:32.819491 | orchestrator | 2025-05-19 21:54:32 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:32.821501 | orchestrator | 2025-05-19 21:54:32 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:32.822974 | orchestrator | 2025-05-19 21:54:32 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:32.823932 | orchestrator | 2025-05-19 21:54:32 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:32.825445 | orchestrator | 2025-05-19 21:54:32 | INFO  | Task 3d63b05e-0d2d-4e1b-a6c9-5376b956fc69 is in state SUCCESS 2025-05-19 21:54:32.827010 | orchestrator | 2025-05-19 21:54:32 | INFO  | Task 
12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:32.827040 | orchestrator | 2025-05-19 21:54:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:35.872242 | orchestrator | 2025-05-19 21:54:35 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:35.873902 | orchestrator | 2025-05-19 21:54:35 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:35.874551 | orchestrator | 2025-05-19 21:54:35 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:35.875596 | orchestrator | 2025-05-19 21:54:35 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:35.880108 | orchestrator | 2025-05-19 21:54:35 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:35.880143 | orchestrator | 2025-05-19 21:54:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:38.924862 | orchestrator | 2025-05-19 21:54:38 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:38.928128 | orchestrator | 2025-05-19 21:54:38 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:38.928782 | orchestrator | 2025-05-19 21:54:38 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:38.935253 | orchestrator | 2025-05-19 21:54:38 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:38.935327 | orchestrator | 2025-05-19 21:54:38 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:38.935337 | orchestrator | 2025-05-19 21:54:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:41.987799 | orchestrator | 2025-05-19 21:54:41 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:41.988540 | orchestrator | 2025-05-19 21:54:41 | INFO  | Task 
ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:41.991247 | orchestrator | 2025-05-19 21:54:41 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:41.992617 | orchestrator | 2025-05-19 21:54:41 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:41.994555 | orchestrator | 2025-05-19 21:54:41 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state STARTED 2025-05-19 21:54:41.994620 | orchestrator | 2025-05-19 21:54:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:45.039169 | orchestrator | 2025-05-19 21:54:45 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:45.040019 | orchestrator | 2025-05-19 21:54:45 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:45.042223 | orchestrator | 2025-05-19 21:54:45 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:45.043467 | orchestrator | 2025-05-19 21:54:45 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:45.046167 | orchestrator | 2025-05-19 21:54:45.046210 | orchestrator | 2025-05-19 21:54:45.046223 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-05-19 21:54:45.046236 | orchestrator | 2025-05-19 21:54:45.046247 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-05-19 21:54:45.046287 | orchestrator | Monday 19 May 2025 21:53:43 +0000 (0:00:00.548) 0:00:00.548 ************ 2025-05-19 21:54:45.046300 | orchestrator | ok: [testbed-manager] => { 2025-05-19 21:54:45.046314 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-05-19 21:54:45.046327 | orchestrator | } 2025-05-19 21:54:45.046339 | orchestrator | 2025-05-19 21:54:45.046357 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-05-19 21:54:45.046369 | orchestrator | Monday 19 May 2025 21:53:44 +0000 (0:00:00.341) 0:00:00.889 ************ 2025-05-19 21:54:45.046380 | orchestrator | ok: [testbed-manager] 2025-05-19 21:54:45.046392 | orchestrator | 2025-05-19 21:54:45.046403 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-05-19 21:54:45.046414 | orchestrator | Monday 19 May 2025 21:53:46 +0000 (0:00:01.854) 0:00:02.744 ************ 2025-05-19 21:54:45.046446 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-05-19 21:54:45.046458 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-05-19 21:54:45.046469 | orchestrator | 2025-05-19 21:54:45.046480 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-05-19 21:54:45.046491 | orchestrator | Monday 19 May 2025 21:53:47 +0000 (0:00:01.780) 0:00:04.524 ************ 2025-05-19 21:54:45.046502 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.046513 | orchestrator | 2025-05-19 21:54:45.046524 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-05-19 21:54:45.046535 | orchestrator | Monday 19 May 2025 21:53:49 +0000 (0:00:01.720) 0:00:06.245 ************ 2025-05-19 21:54:45.046545 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.046556 | orchestrator | 2025-05-19 21:54:45.046567 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-05-19 21:54:45.046578 | orchestrator | Monday 19 May 2025 21:53:50 +0000 (0:00:01.028) 0:00:07.273 ************ 2025-05-19 21:54:45.046589 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-05-19 21:54:45.046600 | orchestrator | ok: [testbed-manager] 2025-05-19 21:54:45.046611 | orchestrator | 2025-05-19 21:54:45.046622 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-19 21:54:45.046634 | orchestrator | Monday 19 May 2025 21:54:14 +0000 (0:00:24.201) 0:00:31.475 ************ 2025-05-19 21:54:45.046644 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.046655 | orchestrator | 2025-05-19 21:54:45.046666 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:54:45.046678 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:45.046690 | orchestrator | 2025-05-19 21:54:45.046701 | orchestrator | 2025-05-19 21:54:45.046712 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:54:45.046723 | orchestrator | Monday 19 May 2025 21:54:16 +0000 (0:00:02.026) 0:00:33.502 ************ 2025-05-19 21:54:45.046734 | orchestrator | =============================================================================== 2025-05-19 21:54:45.046745 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.20s 2025-05-19 21:54:45.046756 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.03s 2025-05-19 21:54:45.046767 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.86s 2025-05-19 21:54:45.046778 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.78s 2025-05-19 21:54:45.046789 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.72s 2025-05-19 21:54:45.046799 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.03s 2025-05-19 21:54:45.046810 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.34s 2025-05-19 21:54:45.046821 | orchestrator | 2025-05-19 21:54:45.046832 | orchestrator | 2025-05-19 21:54:45.046843 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-19 21:54:45.046854 | orchestrator | 2025-05-19 21:54:45.046864 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-19 21:54:45.046875 | orchestrator | Monday 19 May 2025 21:53:44 +0000 (0:00:00.553) 0:00:00.553 ************ 2025-05-19 21:54:45.046887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-19 21:54:45.046899 | orchestrator | 2025-05-19 21:54:45.046910 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-19 21:54:45.046921 | orchestrator | Monday 19 May 2025 21:53:44 +0000 (0:00:00.733) 0:00:01.286 ************ 2025-05-19 21:54:45.046932 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-19 21:54:45.046950 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-19 21:54:45.046961 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-19 21:54:45.046972 | orchestrator | 2025-05-19 21:54:45.046983 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-19 21:54:45.046994 | orchestrator | Monday 19 May 2025 21:53:46 +0000 (0:00:01.868) 0:00:03.155 ************ 2025-05-19 21:54:45.047005 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.047016 | orchestrator | 2025-05-19 21:54:45.047027 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-19 21:54:45.047039 | orchestrator | Monday 19 May 2025 21:53:48 +0000 (0:00:01.762) 0:00:04.918 
************ 2025-05-19 21:54:45.047063 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-19 21:54:45.047074 | orchestrator | ok: [testbed-manager] 2025-05-19 21:54:45.047085 | orchestrator | 2025-05-19 21:54:45.047096 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-19 21:54:45.047107 | orchestrator | Monday 19 May 2025 21:54:24 +0000 (0:00:35.562) 0:00:40.480 ************ 2025-05-19 21:54:45.047118 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.047129 | orchestrator | 2025-05-19 21:54:45.047140 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-19 21:54:45.047151 | orchestrator | Monday 19 May 2025 21:54:25 +0000 (0:00:00.963) 0:00:41.443 ************ 2025-05-19 21:54:45.047162 | orchestrator | ok: [testbed-manager] 2025-05-19 21:54:45.047172 | orchestrator | 2025-05-19 21:54:45.047188 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-19 21:54:45.047199 | orchestrator | Monday 19 May 2025 21:54:25 +0000 (0:00:00.566) 0:00:42.010 ************ 2025-05-19 21:54:45.047210 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.047221 | orchestrator | 2025-05-19 21:54:45.047232 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-19 21:54:45.047243 | orchestrator | Monday 19 May 2025 21:54:27 +0000 (0:00:01.599) 0:00:43.609 ************ 2025-05-19 21:54:45.047274 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.047293 | orchestrator | 2025-05-19 21:54:45.047313 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-19 21:54:45.047331 | orchestrator | Monday 19 May 2025 21:54:28 +0000 (0:00:01.082) 0:00:44.691 ************ 2025-05-19 21:54:45.047349 | orchestrator | changed: 
[testbed-manager] 2025-05-19 21:54:45.047361 | orchestrator | 2025-05-19 21:54:45.047372 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-19 21:54:45.047383 | orchestrator | Monday 19 May 2025 21:54:28 +0000 (0:00:00.663) 0:00:45.355 ************ 2025-05-19 21:54:45.047394 | orchestrator | ok: [testbed-manager] 2025-05-19 21:54:45.047405 | orchestrator | 2025-05-19 21:54:45.047416 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:54:45.047427 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:45.047438 | orchestrator | 2025-05-19 21:54:45.047449 | orchestrator | 2025-05-19 21:54:45.047460 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:54:45.047471 | orchestrator | Monday 19 May 2025 21:54:29 +0000 (0:00:00.421) 0:00:45.776 ************ 2025-05-19 21:54:45.047482 | orchestrator | =============================================================================== 2025-05-19 21:54:45.047493 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.56s 2025-05-19 21:54:45.047504 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.87s 2025-05-19 21:54:45.047515 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.76s 2025-05-19 21:54:45.047526 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.60s 2025-05-19 21:54:45.047537 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.08s 2025-05-19 21:54:45.047555 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.96s 2025-05-19 21:54:45.047566 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.73s 
2025-05-19 21:54:45.047577 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.66s 2025-05-19 21:54:45.047588 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.57s 2025-05-19 21:54:45.047598 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.42s 2025-05-19 21:54:45.047609 | orchestrator | 2025-05-19 21:54:45.047620 | orchestrator | 2025-05-19 21:54:45.047631 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 21:54:45.047642 | orchestrator | 2025-05-19 21:54:45.047653 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 21:54:45.047664 | orchestrator | Monday 19 May 2025 21:53:44 +0000 (0:00:00.404) 0:00:00.404 ************ 2025-05-19 21:54:45.047675 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-19 21:54:45.047686 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-19 21:54:45.047697 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-19 21:54:45.047708 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-19 21:54:45.047718 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-19 21:54:45.047729 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-19 21:54:45.047740 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-19 21:54:45.047751 | orchestrator | 2025-05-19 21:54:45.047762 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-19 21:54:45.047773 | orchestrator | 2025-05-19 21:54:45.047783 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-19 21:54:45.047794 | orchestrator | Monday 19 May 2025 21:53:46 +0000 
(0:00:01.998) 0:00:02.403 ************ 2025-05-19 21:54:45.047820 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 21:54:45.047834 | orchestrator | 2025-05-19 21:54:45.047845 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-19 21:54:45.047856 | orchestrator | Monday 19 May 2025 21:53:48 +0000 (0:00:02.131) 0:00:04.535 ************ 2025-05-19 21:54:45.047867 | orchestrator | ok: [testbed-manager] 2025-05-19 21:54:45.047878 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:54:45.047889 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:54:45.047900 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:54:45.047911 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:54:45.047928 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:54:45.047939 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:54:45.047950 | orchestrator | 2025-05-19 21:54:45.047961 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-19 21:54:45.047972 | orchestrator | Monday 19 May 2025 21:53:50 +0000 (0:00:01.372) 0:00:05.907 ************ 2025-05-19 21:54:45.047983 | orchestrator | ok: [testbed-manager] 2025-05-19 21:54:45.047994 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:54:45.048005 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:54:45.048016 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:54:45.048026 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:54:45.048037 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:54:45.048048 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:54:45.048059 | orchestrator | 2025-05-19 21:54:45.048075 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-19 
21:54:45.048086 | orchestrator | Monday 19 May 2025 21:53:53 +0000 (0:00:03.318) 0:00:09.226 ************ 2025-05-19 21:54:45.048097 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.048108 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:54:45.048119 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:54:45.048140 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:54:45.048151 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:54:45.048162 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:54:45.048173 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:54:45.048184 | orchestrator | 2025-05-19 21:54:45.048195 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-19 21:54:45.048207 | orchestrator | Monday 19 May 2025 21:53:55 +0000 (0:00:02.648) 0:00:11.874 ************ 2025-05-19 21:54:45.048218 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.048229 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:54:45.048239 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:54:45.048251 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:54:45.048334 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:54:45.048346 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:54:45.048356 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:54:45.048366 | orchestrator | 2025-05-19 21:54:45.048376 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-19 21:54:45.048386 | orchestrator | Monday 19 May 2025 21:54:08 +0000 (0:00:12.175) 0:00:24.050 ************ 2025-05-19 21:54:45.048395 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.048405 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:54:45.048415 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:54:45.048424 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:54:45.048434 | orchestrator | changed: 
[testbed-node-5] 2025-05-19 21:54:45.048444 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:54:45.048453 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:54:45.048463 | orchestrator | 2025-05-19 21:54:45.048473 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-19 21:54:45.048483 | orchestrator | Monday 19 May 2025 21:54:24 +0000 (0:00:16.296) 0:00:40.346 ************ 2025-05-19 21:54:45.048494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 21:54:45.048505 | orchestrator | 2025-05-19 21:54:45.048515 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-19 21:54:45.048525 | orchestrator | Monday 19 May 2025 21:54:25 +0000 (0:00:01.412) 0:00:41.759 ************ 2025-05-19 21:54:45.048535 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-19 21:54:45.048545 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-19 21:54:45.048554 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-19 21:54:45.048564 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-19 21:54:45.048574 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-19 21:54:45.048583 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-19 21:54:45.048593 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-19 21:54:45.048617 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-19 21:54:45.048627 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-19 21:54:45.048647 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-19 21:54:45.048657 | orchestrator | changed: [testbed-node-1] => 
(item=stream.conf) 2025-05-19 21:54:45.048667 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-19 21:54:45.048676 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-19 21:54:45.048686 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-19 21:54:45.048696 | orchestrator | 2025-05-19 21:54:45.048705 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-19 21:54:45.048715 | orchestrator | Monday 19 May 2025 21:54:30 +0000 (0:00:04.730) 0:00:46.489 ************ 2025-05-19 21:54:45.048725 | orchestrator | ok: [testbed-manager] 2025-05-19 21:54:45.048735 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:54:45.048751 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:54:45.048761 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:54:45.048771 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:54:45.048781 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:54:45.048790 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:54:45.048800 | orchestrator | 2025-05-19 21:54:45.048810 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-19 21:54:45.048820 | orchestrator | Monday 19 May 2025 21:54:31 +0000 (0:00:01.286) 0:00:47.776 ************ 2025-05-19 21:54:45.048830 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.048840 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:54:45.048849 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:54:45.048859 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:54:45.048869 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:54:45.048878 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:54:45.048888 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:54:45.048898 | orchestrator | 2025-05-19 21:54:45.048908 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] 
*************** 2025-05-19 21:54:45.048924 | orchestrator | Monday 19 May 2025 21:54:33 +0000 (0:00:01.464) 0:00:49.240 ************ 2025-05-19 21:54:45.048935 | orchestrator | ok: [testbed-manager] 2025-05-19 21:54:45.048944 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:54:45.048954 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:54:45.048964 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:54:45.048974 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:54:45.048983 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:54:45.048993 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:54:45.049003 | orchestrator | 2025-05-19 21:54:45.049012 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-19 21:54:45.049022 | orchestrator | Monday 19 May 2025 21:54:34 +0000 (0:00:01.304) 0:00:50.545 ************ 2025-05-19 21:54:45.049032 | orchestrator | ok: [testbed-manager] 2025-05-19 21:54:45.049042 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:54:45.049056 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:54:45.049066 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:54:45.049076 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:54:45.049085 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:54:45.049095 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:54:45.049105 | orchestrator | 2025-05-19 21:54:45.049114 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-19 21:54:45.049124 | orchestrator | Monday 19 May 2025 21:54:36 +0000 (0:00:02.022) 0:00:52.567 ************ 2025-05-19 21:54:45.049134 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-19 21:54:45.049146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 21:54:45.049156 | orchestrator | 2025-05-19 21:54:45.049166 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-19 21:54:45.049176 | orchestrator | Monday 19 May 2025 21:54:38 +0000 (0:00:01.441) 0:00:54.009 ************ 2025-05-19 21:54:45.049185 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.049195 | orchestrator | 2025-05-19 21:54:45.049205 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-19 21:54:45.049215 | orchestrator | Monday 19 May 2025 21:54:40 +0000 (0:00:02.526) 0:00:56.536 ************ 2025-05-19 21:54:45.049225 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:54:45.049235 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:54:45.049245 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:54:45.049276 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:54:45.049288 | orchestrator | changed: [testbed-manager] 2025-05-19 21:54:45.049297 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:54:45.049307 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:54:45.049323 | orchestrator | 2025-05-19 21:54:45.049333 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:54:45.049343 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:45.049353 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:45.049363 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:45.049372 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:45.049382 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-05-19 21:54:45.049392 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:45.049401 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:54:45.049411 | orchestrator | 2025-05-19 21:54:45.049421 | orchestrator | 2025-05-19 21:54:45.049431 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:54:45.049440 | orchestrator | Monday 19 May 2025 21:54:43 +0000 (0:00:02.950) 0:00:59.486 ************ 2025-05-19 21:54:45.049450 | orchestrator | =============================================================================== 2025-05-19 21:54:45.049459 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.30s 2025-05-19 21:54:45.049469 | orchestrator | osism.services.netdata : Add repository -------------------------------- 12.18s 2025-05-19 21:54:45.049479 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.73s 2025-05-19 21:54:45.049488 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.32s 2025-05-19 21:54:45.049498 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.95s 2025-05-19 21:54:45.049507 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.65s 2025-05-19 21:54:45.049517 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.53s 2025-05-19 21:54:45.049526 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.13s 2025-05-19 21:54:45.049536 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.02s 2025-05-19 21:54:45.049546 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.00s 2025-05-19 
21:54:45.049555 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.46s 2025-05-19 21:54:45.049570 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.44s 2025-05-19 21:54:45.049580 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.41s 2025-05-19 21:54:45.049590 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.37s 2025-05-19 21:54:45.049600 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.30s 2025-05-19 21:54:45.049609 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.29s 2025-05-19 21:54:45.049644 | orchestrator | 2025-05-19 21:54:45 | INFO  | Task 12fde055-b27e-480d-8794-26cd25eaff64 is in state SUCCESS 2025-05-19 21:54:45.049660 | orchestrator | 2025-05-19 21:54:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:48.085553 | orchestrator | 2025-05-19 21:54:48 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:48.086364 | orchestrator | 2025-05-19 21:54:48 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:48.087800 | orchestrator | 2025-05-19 21:54:48 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:48.088604 | orchestrator | 2025-05-19 21:54:48 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:48.089388 | orchestrator | 2025-05-19 21:54:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:51.130555 | orchestrator | 2025-05-19 21:54:51 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:51.131302 | orchestrator | 2025-05-19 21:54:51 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:51.132180 | orchestrator | 2025-05-19 21:54:51 | INFO  
| Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:51.133691 | orchestrator | 2025-05-19 21:54:51 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:51.133714 | orchestrator | 2025-05-19 21:54:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:54.181684 | orchestrator | 2025-05-19 21:54:54 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:54.185135 | orchestrator | 2025-05-19 21:54:54 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:54.187972 | orchestrator | 2025-05-19 21:54:54 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:54.189921 | orchestrator | 2025-05-19 21:54:54 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:54.189947 | orchestrator | 2025-05-19 21:54:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:54:57.237348 | orchestrator | 2025-05-19 21:54:57 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:54:57.239549 | orchestrator | 2025-05-19 21:54:57 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:54:57.242104 | orchestrator | 2025-05-19 21:54:57 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:54:57.244982 | orchestrator | 2025-05-19 21:54:57 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:54:57.245043 | orchestrator | 2025-05-19 21:54:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:55:00.288902 | orchestrator | 2025-05-19 21:55:00 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:55:00.289030 | orchestrator | 2025-05-19 21:55:00 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:55:00.289915 | orchestrator | 2025-05-19 21:55:00 | INFO  | Task 
aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:55:00.291162 | orchestrator | 2025-05-19 21:55:00 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:55:00.292984 | orchestrator | 2025-05-19 21:55:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:55:03.347337 | orchestrator | 2025-05-19 21:55:03 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:55:03.347449 | orchestrator | 2025-05-19 21:55:03 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:55:03.348019 | orchestrator | 2025-05-19 21:55:03 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:55:03.348595 | orchestrator | 2025-05-19 21:55:03 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:55:03.348616 | orchestrator | 2025-05-19 21:55:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:55:06.400002 | orchestrator | 2025-05-19 21:55:06 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:55:06.403207 | orchestrator | 2025-05-19 21:55:06 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:55:06.403283 | orchestrator | 2025-05-19 21:55:06 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:55:06.403571 | orchestrator | 2025-05-19 21:55:06 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:55:06.403729 | orchestrator | 2025-05-19 21:55:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:55:09.451380 | orchestrator | 2025-05-19 21:55:09 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:55:09.452156 | orchestrator | 2025-05-19 21:55:09 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:55:09.453351 | orchestrator | 2025-05-19 21:55:09 | INFO  | Task 
aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:55:09.454940 | orchestrator | 2025-05-19 21:55:09 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:55:09.454967 | orchestrator | 2025-05-19 21:55:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:55:12.503534 | orchestrator | 2025-05-19 21:55:12 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:55:12.503638 | orchestrator | 2025-05-19 21:55:12 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:55:12.503749 | orchestrator | 2025-05-19 21:55:12 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:55:12.504615 | orchestrator | 2025-05-19 21:55:12 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:55:12.504692 | orchestrator | 2025-05-19 21:55:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:55:15.550430 | orchestrator | 2025-05-19 21:55:15 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:55:15.551524 | orchestrator | 2025-05-19 21:55:15 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:55:15.552385 | orchestrator | 2025-05-19 21:55:15 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:55:15.553462 | orchestrator | 2025-05-19 21:55:15 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:55:15.553689 | orchestrator | 2025-05-19 21:55:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:55:18.597526 | orchestrator | 2025-05-19 21:55:18 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:55:18.598278 | orchestrator | 2025-05-19 21:55:18 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:55:18.599408 | orchestrator | 2025-05-19 21:55:18 | INFO  | Task 
aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:55:18.600285 | orchestrator | 2025-05-19 21:55:18 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:55:18.600332 | orchestrator | 2025-05-19 21:55:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:55:21.654500 | orchestrator | 2025-05-19 21:55:21 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:55:21.655423 | orchestrator | 2025-05-19 21:55:21 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:55:21.657554 | orchestrator | 2025-05-19 21:55:21 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:55:21.659537 | orchestrator | 2025-05-19 21:55:21 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:55:21.659608 | orchestrator | 2025-05-19 21:55:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:55:24.723010 | orchestrator | 2025-05-19 21:55:24 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state STARTED 2025-05-19 21:55:24.725041 | orchestrator | 2025-05-19 21:55:24 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:55:24.725719 | orchestrator | 2025-05-19 21:55:24 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED 2025-05-19 21:55:24.730010 | orchestrator | 2025-05-19 21:55:24 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:55:24.730100 | orchestrator | 2025-05-19 21:55:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:55:27.775413 | orchestrator | 2025-05-19 21:55:27 | INFO  | Task f81f364b-8a54-4051-8d6c-0a84ab7f32db is in state SUCCESS 2025-05-19 21:55:27.777263 | orchestrator | 2025-05-19 21:55:27 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:55:27.778278 | orchestrator | 2025-05-19 21:55:27 | INFO  | Task 
aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:27.779134 | orchestrator | 2025-05-19 21:55:27 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:27.779178 | orchestrator | 2025-05-19 21:55:27 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:55:30.821024 | orchestrator | 2025-05-19 21:55:30 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:55:30.821931 | orchestrator | 2025-05-19 21:55:30 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:30.823080 | orchestrator | 2025-05-19 21:55:30 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:30.823321 | orchestrator | 2025-05-19 21:55:30 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:55:33.885830 | orchestrator | 2025-05-19 21:55:33 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:55:33.887423 | orchestrator | 2025-05-19 21:55:33 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:33.889945 | orchestrator | 2025-05-19 21:55:33 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:33.890247 | orchestrator | 2025-05-19 21:55:33 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:55:36.945851 | orchestrator | 2025-05-19 21:55:36 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:55:36.947135 | orchestrator | 2025-05-19 21:55:36 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:36.948984 | orchestrator | 2025-05-19 21:55:36 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:36.950184 | orchestrator | 2025-05-19 21:55:36 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:55:40.006142 | orchestrator | 2025-05-19 21:55:40 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:55:40.007508 | orchestrator | 2025-05-19 21:55:40 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:40.009406 | orchestrator | 2025-05-19 21:55:40 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:40.010355 | orchestrator | 2025-05-19 21:55:40 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:55:43.065175 | orchestrator | 2025-05-19 21:55:43 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:55:43.065304 | orchestrator | 2025-05-19 21:55:43 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:43.066912 | orchestrator | 2025-05-19 21:55:43 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:43.067997 | orchestrator | 2025-05-19 21:55:43 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:55:46.110742 | orchestrator | 2025-05-19 21:55:46 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:55:46.112542 | orchestrator | 2025-05-19 21:55:46 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:46.114097 | orchestrator | 2025-05-19 21:55:46 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:46.114130 | orchestrator | 2025-05-19 21:55:46 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:55:49.177071 | orchestrator | 2025-05-19 21:55:49 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:55:49.178747 | orchestrator | 2025-05-19 21:55:49 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:49.180390 | orchestrator | 2025-05-19 21:55:49 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:49.180467 | orchestrator | 2025-05-19 21:55:49 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:55:52.231681 | orchestrator | 2025-05-19 21:55:52 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:55:52.235482 | orchestrator | 2025-05-19 21:55:52 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:52.239006 | orchestrator | 2025-05-19 21:55:52 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:52.239062 | orchestrator | 2025-05-19 21:55:52 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:55:55.288788 | orchestrator | 2025-05-19 21:55:55 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:55:55.290701 | orchestrator | 2025-05-19 21:55:55 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:55.293058 | orchestrator | 2025-05-19 21:55:55 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:55.293423 | orchestrator | 2025-05-19 21:55:55 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:55:58.347531 | orchestrator | 2025-05-19 21:55:58 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:55:58.348635 | orchestrator | 2025-05-19 21:55:58 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:55:58.350146 | orchestrator | 2025-05-19 21:55:58 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:55:58.350295 | orchestrator | 2025-05-19 21:55:58 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:56:01.395578 | orchestrator | 2025-05-19 21:56:01 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:56:01.396304 | orchestrator | 2025-05-19 21:56:01 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:56:01.397295 | orchestrator | 2025-05-19 21:56:01 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:56:01.397326 | orchestrator | 2025-05-19 21:56:01 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:56:04.457168 | orchestrator | 2025-05-19 21:56:04 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:56:04.458432 | orchestrator | 2025-05-19 21:56:04 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:56:04.459789 | orchestrator | 2025-05-19 21:56:04 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:56:04.459822 | orchestrator | 2025-05-19 21:56:04 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:56:07.507457 | orchestrator | 2025-05-19 21:56:07 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:56:07.509297 | orchestrator | 2025-05-19 21:56:07 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state STARTED
2025-05-19 21:56:07.510701 | orchestrator | 2025-05-19 21:56:07 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:56:07.510780 | orchestrator | 2025-05-19 21:56:07 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:56:10.565577 | orchestrator | 2025-05-19 21:56:10 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:56:10.573402 | orchestrator | 2025-05-19 21:56:10 | INFO  | Task aec03a51-faa6-4333-9ffb-f8897d1ac6c9 is in state SUCCESS
2025-05-19 21:56:10.577540 | orchestrator |
2025-05-19 21:56:10.577601 | orchestrator |
2025-05-19 21:56:10.577615 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-19 21:56:10.577628 | orchestrator |
2025-05-19 21:56:10.577639 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-19 21:56:10.577651 | orchestrator | Monday 19 May 2025 21:54:05 +0000 (0:00:00.220) 0:00:00.220 ************
2025-05-19 21:56:10.577662 | orchestrator | ok: [testbed-manager]
2025-05-19 21:56:10.577674 | orchestrator |
2025-05-19 21:56:10.577685 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-19 21:56:10.577697 | orchestrator | Monday 19 May 2025 21:54:06 +0000 (0:00:00.740) 0:00:00.960 ************
2025-05-19 21:56:10.577709 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-19 21:56:10.577720 | orchestrator |
2025-05-19 21:56:10.577731 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-19 21:56:10.577742 | orchestrator | Monday 19 May 2025 21:54:07 +0000 (0:00:00.669) 0:00:01.629 ************
2025-05-19 21:56:10.577753 | orchestrator | changed: [testbed-manager]
2025-05-19 21:56:10.577764 | orchestrator |
2025-05-19 21:56:10.577775 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-19 21:56:10.577786 | orchestrator | Monday 19 May 2025 21:54:08 +0000 (0:00:01.317) 0:00:02.946 ************
2025-05-19 21:56:10.577803 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-05-19 21:56:10.577823 | orchestrator | ok: [testbed-manager]
2025-05-19 21:56:10.577841 | orchestrator |
2025-05-19 21:56:10.577862 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-19 21:56:10.577882 | orchestrator | Monday 19 May 2025 21:55:22 +0000 (0:01:13.889) 0:01:16.836 ************
2025-05-19 21:56:10.577904 | orchestrator | changed: [testbed-manager]
2025-05-19 21:56:10.577922 | orchestrator |
2025-05-19 21:56:10.577937 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:56:10.577949 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 21:56:10.577961 | orchestrator |
2025-05-19 21:56:10.577972 | orchestrator |
2025-05-19 21:56:10.577983 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:56:10.577994 | orchestrator | Monday 19 May 2025 21:55:25 +0000 (0:00:03.478) 0:01:20.314 ************
2025-05-19 21:56:10.578005 | orchestrator | ===============================================================================
2025-05-19 21:56:10.578076 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 73.89s
2025-05-19 21:56:10.578092 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.48s
2025-05-19 21:56:10.578103 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.32s
2025-05-19 21:56:10.578139 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.74s
2025-05-19 21:56:10.578159 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.67s
2025-05-19 21:56:10.578171 | orchestrator |
2025-05-19 21:56:10.578183 | orchestrator |
2025-05-19 21:56:10.578218 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-19 21:56:10.578232 | orchestrator |
2025-05-19 21:56:10.578244 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-19 21:56:10.578256 | orchestrator | Monday 19 May 2025 21:53:37 +0000 (0:00:00.295) 0:00:00.295 ************
2025-05-19 21:56:10.578270 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 21:56:10.578284 | orchestrator |
2025-05-19 21:56:10.578296 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-19 21:56:10.578308 | orchestrator | Monday 19 May 2025 21:53:38 +0000 (0:00:01.326) 0:00:01.621 ************
2025-05-19 21:56:10.578320 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 21:56:10.578333 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 21:56:10.578345 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 21:56:10.578357 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 21:56:10.578370 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 21:56:10.578382 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 21:56:10.578394 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 21:56:10.578405 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 21:56:10.578418 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 21:56:10.578430 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 21:56:10.578444 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-19 21:56:10.578456 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 21:56:10.578470 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 21:56:10.578482 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 21:56:10.578494 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 21:56:10.578504 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 21:56:10.578532 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-19 21:56:10.578543 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 21:56:10.578555 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 21:56:10.578565 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 21:56:10.578576 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-05-19 21:56:10.578587 | orchestrator |
2025-05-19 21:56:10.578598 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-19 21:56:10.578609 | orchestrator | Monday 19 May 2025 21:53:43 +0000 (0:00:04.297) 0:00:05.919 ************
2025-05-19 21:56:10.578620 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 21:56:10.578632 | orchestrator |
2025-05-19 21:56:10.578651 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-05-19 21:56:10.578662 | orchestrator | Monday 19 May 2025 21:53:44 +0000 (0:00:01.140) 0:00:07.059 ************
2025-05-19 21:56:10.578678 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.578695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.578712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.578724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.578736 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.578761 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.578773 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.578792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.578803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.578819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.578831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.578845 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.578882 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.578915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.578953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.578966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.578978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.578990 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579002 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579013 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579036 | orchestrator |
2025-05-19 21:56:10.579047 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-05-19 21:56:10.579058 | orchestrator | Monday 19 May 2025 21:53:49 +0000 (0:00:05.033) 0:00:12.093 ************
2025-05-19 21:56:10.579084 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579104 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579117 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579128 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:56:10.579146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579185 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:56:10.579245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579325 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:56:10.579337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579399 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:56:10.579410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579422 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:56:10.579440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579483 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:56:10.579495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579533 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:56:10.579545 | orchestrator |
2025-05-19 21:56:10.579556 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-05-19 21:56:10.579567 | orchestrator | Monday 19 May 2025 21:53:50 +0000 (0:00:01.168) 0:00:13.261 ************
2025-05-19 21:56:10.579578 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579596 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579613 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2',
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579625 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:56:10.579637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579671 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:56:10.579686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579727 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:56:10.579738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579780 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:56:10.579791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579845 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:56:10.579856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image':
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579921 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:56:10.579941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.579963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.579987 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:56:10.579998 | orchestrator |
2025-05-19 21:56:10.580009 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-05-19 21:56:10.580020 | orchestrator | Monday 19 May 2025 21:53:52 +0000 (0:00:02.395) 0:00:15.657 ************
2025-05-19 21:56:10.580031 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:56:10.580042 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:56:10.580053 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:56:10.580064 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:56:10.580075 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:56:10.580091 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:56:10.580102 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:56:10.580113 | orchestrator |
2025-05-19 21:56:10.580131 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-05-19 21:56:10.580142 | orchestrator | Monday 19 May 2025 21:53:53 +0000 (0:00:00.678) 0:00:16.336 ************
2025-05-19 21:56:10.580153 | orchestrator | skipping: [testbed-manager]
2025-05-19 21:56:10.580164 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:56:10.580175 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:56:10.580186 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:56:10.580229 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:56:10.580242 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:56:10.580252 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:56:10.580263 | orchestrator |
2025-05-19 21:56:10.580274 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-05-19 21:56:10.580285 | orchestrator | Monday 19 May 2025 21:53:54 +0000 (0:00:01.072) 0:00:17.408 ************
2025-05-19 21:56:10.580296 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.580308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.580329 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.580352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.580364 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.580390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.580402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580414 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.580444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580484 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580574 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image':
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.580602 | orchestrator |
2025-05-19 21:56:10.580613 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-05-19 21:56:10.580624 | orchestrator | Monday 19 May 2025 21:53:59 +0000 (0:00:05.386) 0:00:22.795 ************
2025-05-19 21:56:10.580635 | orchestrator | [WARNING]: Skipped
2025-05-19 21:56:10.580647 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-05-19 21:56:10.580658 | orchestrator | to this access issue:
2025-05-19 21:56:10.580669 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-05-19 21:56:10.580680 | orchestrator | directory
2025-05-19 21:56:10.580691 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-19 21:56:10.580703 | orchestrator |
2025-05-19 21:56:10.580713 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-05-19 21:56:10.580724 | orchestrator | Monday 19 May 2025 21:54:01 +0000 (0:00:01.593) 0:00:24.389 ************
2025-05-19 21:56:10.580736 | orchestrator | [WARNING]: Skipped
2025-05-19 21:56:10.580746 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-05-19 21:56:10.580762 | orchestrator | to this access issue:
2025-05-19 21:56:10.580773 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-05-19 21:56:10.580784 | orchestrator | directory
2025-05-19 21:56:10.580795 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-19 21:56:10.580806 | orchestrator |
2025-05-19 21:56:10.580817 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-05-19 21:56:10.580828 | orchestrator | Monday 19 May 2025 21:54:02 +0000 (0:00:01.123) 0:00:25.512 ************
2025-05-19 21:56:10.580839 | orchestrator | [WARNING]: Skipped
2025-05-19 21:56:10.580850 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-05-19 21:56:10.580860 | orchestrator | to this access issue:
2025-05-19 21:56:10.580871 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-05-19 21:56:10.580882 | orchestrator | directory
2025-05-19 21:56:10.580893 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-19 21:56:10.580904 | orchestrator |
2025-05-19 21:56:10.580915 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-05-19 21:56:10.580927 | orchestrator | Monday 19 May 2025 21:54:03 +0000 (0:00:01.171) 0:00:26.683 ************
2025-05-19 21:56:10.580947 | orchestrator | [WARNING]: Skipped
2025-05-19 21:56:10.580968 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-05-19 21:56:10.580990 | orchestrator | to this access issue:
2025-05-19 21:56:10.581011 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-05-19 21:56:10.581030 | orchestrator | directory
2025-05-19 21:56:10.581041 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-19 21:56:10.581052 | orchestrator |
2025-05-19 21:56:10.581063 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-05-19 21:56:10.581073 | orchestrator | Monday 19 May 2025 21:54:04 +0000 (0:00:00.988) 0:00:27.672 ************
2025-05-19 21:56:10.581084 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:56:10.581095 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:56:10.581106 | orchestrator | changed: [testbed-manager]
2025-05-19 21:56:10.581116 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:56:10.581127 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:56:10.581138 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:56:10.581148 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:56:10.581159 | orchestrator |
2025-05-19 21:56:10.581170 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-05-19 21:56:10.581181 | orchestrator | Monday 19 May 2025 21:54:08 +0000 (0:00:03.369) 0:00:31.041 ************
2025-05-19 21:56:10.581192 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-19 21:56:10.581236 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-19 21:56:10.581248 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-19 21:56:10.581266 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-19 21:56:10.581278 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-19 21:56:10.581289 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-19 21:56:10.581300 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-19 21:56:10.581311 | orchestrator |
2025-05-19 21:56:10.581322 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-05-19 21:56:10.581333 | orchestrator | Monday 19 May 2025 21:54:11 +0000 (0:00:02.702) 0:00:34.537 ************
2025-05-19 21:56:10.581344 | orchestrator | changed: [testbed-manager]
2025-05-19 21:56:10.581355 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:56:10.581366 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:56:10.581377 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:56:10.581387 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:56:10.581398 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:56:10.581409 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:56:10.581420 | orchestrator |
2025-05-19 21:56:10.581431 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-05-19 21:56:10.581442 | orchestrator | Monday 19 May 2025 21:54:14 +0000 (0:00:02.702) 0:00:37.239 ************
2025-05-19 21:56:10.581454 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-19 21:56:10.581471 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.581483 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2',
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.581495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 21:56:10.581513 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.581532 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.581544 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 21:56:10.581556 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.581568 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.581583 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.581595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 21:56:10.581606 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.581624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 21:56:10.581642 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.581654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 21:56:10.581666 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.581677 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-19 21:56:10.581693 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.581705 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.581716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 21:56:10.581733 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.581745 | orchestrator |
2025-05-19 21:56:10.581756 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-05-19 21:56:10.581767 | orchestrator | Monday 19 May 2025 21:54:17 +0000 (0:00:02.863) 0:00:40.103 ************
2025-05-19 21:56:10.581778 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-19 21:56:10.581789 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-19 21:56:10.581800 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-19 21:56:10.581820 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-19 21:56:10.581832 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-19 21:56:10.581843 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-19 21:56:10.581853 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-19 21:56:10.581864 | orchestrator |
2025-05-19 21:56:10.581875 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-05-19 21:56:10.581886 | orchestrator | Monday 19 May 2025 21:54:19 +0000 (0:00:02.592) 0:00:42.695 ************
2025-05-19 21:56:10.581897 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-19 21:56:10.581908 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-19 21:56:10.581919 | orchestrator | changed: [testbed-node-1] =>
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 21:56:10.581930 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 21:56:10.581941 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 21:56:10.581952 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 21:56:10.581966 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-19 21:56:10.581985 | orchestrator | 2025-05-19 21:56:10.582003 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-19 21:56:10.582064 | orchestrator | Monday 19 May 2025 21:54:22 +0000 (0:00:02.243) 0:00:44.938 ************ 2025-05-19 21:56:10.582083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.582100 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.582120 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.582132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.582151 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.582163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.582174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.582186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.582235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-19 21:56:10.582256 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.582268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.582280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.582297 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-19 21:56:10.582309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.582321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.582332 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-19 21:56:10.582354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.582366 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.582377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 21:56:10.582389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})
2025-05-19 21:56:10.582400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 21:56:10.582411 | orchestrator |
2025-05-19 21:56:10.582429 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-05-19 21:56:10.582440 | orchestrator | Monday 19 May 2025 21:54:26 +0000 (0:00:03.872) 0:00:48.811 ************
2025-05-19 21:56:10.582451 | orchestrator | changed: [testbed-manager]
2025-05-19 21:56:10.582462 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:56:10.582473 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:56:10.582484 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:56:10.582495 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:56:10.582505 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:56:10.582516 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:56:10.582527 | orchestrator |
2025-05-19 21:56:10.582538 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-05-19 21:56:10.582549 | orchestrator | Monday 19 May 2025 21:54:27 +0000 (0:00:01.597) 0:00:50.408 ************
2025-05-19 21:56:10.582560 | orchestrator | changed: [testbed-manager]
2025-05-19 21:56:10.582571 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:56:10.582582 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:56:10.582592 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:56:10.582603 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:56:10.582614 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:56:10.582624 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:56:10.582635 | orchestrator |
2025-05-19 21:56:10.582652 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-19 21:56:10.582663 | orchestrator | Monday 19 May 2025 21:54:29 +0000 (0:00:01.515) 0:00:51.923 ************
2025-05-19 21:56:10.582674 | orchestrator |
2025-05-19 21:56:10.582685 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-19 21:56:10.582695 | orchestrator | Monday 19 May 2025 21:54:29 +0000 (0:00:00.077) 0:00:52.001 ************
2025-05-19 21:56:10.582706 | orchestrator |
2025-05-19 21:56:10.582717 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-19 21:56:10.582728 | orchestrator | Monday 19 May 2025 21:54:29 +0000 (0:00:00.078) 0:00:52.079 ************
2025-05-19 21:56:10.582739 | orchestrator |
2025-05-19 21:56:10.582749 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-19 21:56:10.582760 | orchestrator | Monday 19 May 2025 21:54:29 +0000 (0:00:00.226) 0:00:52.305 ************
2025-05-19 21:56:10.582771 | orchestrator |
2025-05-19 21:56:10.582782 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-19 21:56:10.582793 | orchestrator | Monday 19 May 2025 21:54:29 +0000 (0:00:00.068) 0:00:52.374 ************
2025-05-19 21:56:10.582804 | orchestrator |
2025-05-19 21:56:10.582815 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-19 21:56:10.582826 | orchestrator | Monday 19 May 2025 21:54:29 +0000 (0:00:00.098) 0:00:52.472 ************
2025-05-19 21:56:10.582836 | orchestrator |
2025-05-19 21:56:10.582847 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-05-19 21:56:10.582858 | orchestrator | Monday 19 May 2025 21:54:29 +0000 (0:00:00.090) 0:00:52.563 ************
2025-05-19 21:56:10.582869 | orchestrator |
2025-05-19 21:56:10.582880 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-05-19 21:56:10.582891 | orchestrator | Monday 19 May 2025 21:54:29 +0000 (0:00:00.140) 0:00:52.704 ************
2025-05-19 21:56:10.582902 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:56:10.582912 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:56:10.582923 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:56:10.582934 | orchestrator | changed: [testbed-manager]
2025-05-19 21:56:10.582945 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:56:10.582956 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:56:10.582966 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:56:10.582977 | orchestrator |
2025-05-19 21:56:10.582988 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-05-19 21:56:10.583001 | orchestrator | Monday 19 May 2025 21:55:10 +0000 (0:00:40.656) 0:01:33.360 ************
2025-05-19 21:56:10.583020 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:56:10.583040 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:56:10.583060 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:56:10.583079 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:56:10.583098 | orchestrator | changed: [testbed-manager]
2025-05-19 21:56:10.583110 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:56:10.583121 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:56:10.583131 | orchestrator |
2025-05-19 21:56:10.583142 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-05-19 21:56:10.583153 | orchestrator | Monday 19 May 2025 21:55:58 +0000 (0:00:48.116) 0:02:21.477 ************
2025-05-19 21:56:10.583164 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:56:10.583175 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:56:10.583185 | orchestrator | ok: [testbed-manager]
2025-05-19 21:56:10.583251 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:56:10.583265 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:56:10.583276 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:56:10.583287 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:56:10.583297 | orchestrator |
2025-05-19 21:56:10.583309 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-05-19 21:56:10.583319 | orchestrator | Monday 19 May 2025 21:56:01 +0000 (0:00:02.330) 0:02:23.808 ************
2025-05-19 21:56:10.583338 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:56:10.583349 | orchestrator | changed: [testbed-manager]
2025-05-19 21:56:10.583360 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:56:10.583370 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:56:10.583381 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:56:10.583392 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:56:10.583402 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:56:10.583413 | orchestrator |
2025-05-19 21:56:10.583424 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:56:10.583436 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-19 21:56:10.583447 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-19 21:56:10.583466 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-19 21:56:10.583477 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-19 21:56:10.583488 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-19 21:56:10.583499 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-19 21:56:10.583510 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-05-19 21:56:10.583520 | orchestrator |
2025-05-19 21:56:10.583531 | orchestrator |
2025-05-19 21:56:10.583542 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:56:10.583551 | orchestrator | Monday 19 May 2025 21:56:10 +0000 (0:00:09.001) 0:02:32.809 ************
2025-05-19 21:56:10.583561 | orchestrator | ===============================================================================
2025-05-19 21:56:10.583571 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 48.12s
2025-05-19 21:56:10.583580 | orchestrator | common : Restart fluentd container ------------------------------------- 40.66s
2025-05-19 21:56:10.583590 | orchestrator | common : Restart cron container ----------------------------------------- 9.00s
2025-05-19 21:56:10.583599 | orchestrator | common : Copying over config.json files for services -------------------- 5.39s
2025-05-19 21:56:10.583609 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.03s
2025-05-19 21:56:10.583623 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.30s
2025-05-19 21:56:10.583640 | orchestrator | common : Check common containers ---------------------------------------- 3.87s
2025-05-19 21:56:10.583655 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.50s
2025-05-19 21:56:10.583671 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.37s
2025-05-19 21:56:10.583689 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.86s
2025-05-19 21:56:10.583700 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.70s
2025-05-19 21:56:10.583709 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.59s
2025-05-19 21:56:10.583725 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.40s
2025-05-19 21:56:10.583738 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.33s
2025-05-19 21:56:10.583748 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.24s
2025-05-19 21:56:10.583758 | orchestrator | common : Creating log volume -------------------------------------------- 1.60s
2025-05-19 21:56:10.583768 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.59s
2025-05-19 21:56:10.583783 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.52s
2025-05-19 21:56:10.583793 | orchestrator | common : include_tasks -------------------------------------------------- 1.33s
2025-05-19 21:56:10.583802 | orchestrator | common : Find custom fluentd format config files ------------------------ 1.17s
2025-05-19 21:56:10.583812 | orchestrator | 2025-05-19 21:56:10 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:56:10.583822 | orchestrator | 2025-05-19 21:56:10 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:56:13.632464 | orchestrator | 2025-05-19 21:56:13 | INFO  | Task f9c759e2-5c40-413c-a320-93cd251bd45b is in state STARTED
2025-05-19 21:56:13.632741 | orchestrator | 2025-05-19 21:56:13 | INFO  | Task f71ef190-961f-4802-84bc-380110316823 is in state STARTED
2025-05-19 21:56:13.633218 | orchestrator | 2025-05-19 21:56:13 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:56:13.634009 | orchestrator | 2025-05-19 21:56:13 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:56:13.634861 | orchestrator |
2025-05-19 21:56:13 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:56:13.638614 | orchestrator | 2025-05-19 21:56:13 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state STARTED 2025-05-19 21:56:13.638666 | orchestrator | 2025-05-19 21:56:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:56:16.699244 | orchestrator | 2025-05-19 21:56:16 | INFO  | Task f9c759e2-5c40-413c-a320-93cd251bd45b is in state STARTED 2025-05-19 21:56:16.699320 | orchestrator | 2025-05-19 21:56:16 | INFO  | Task f71ef190-961f-4802-84bc-380110316823 is in state STARTED 2025-05-19 21:56:16.699334 | orchestrator | 2025-05-19 21:56:16 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:56:16.699507 | orchestrator | 2025-05-19 21:56:16 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:56:16.700077 | orchestrator | 2025-05-19 21:56:16 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:56:16.702321 | orchestrator | 2025-05-19 21:56:16 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state STARTED 2025-05-19 21:56:16.702353 | orchestrator | 2025-05-19 21:56:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:56:19.750467 | orchestrator | 2025-05-19 21:56:19 | INFO  | Task f9c759e2-5c40-413c-a320-93cd251bd45b is in state STARTED 2025-05-19 21:56:19.750564 | orchestrator | 2025-05-19 21:56:19 | INFO  | Task f71ef190-961f-4802-84bc-380110316823 is in state STARTED 2025-05-19 21:56:19.751576 | orchestrator | 2025-05-19 21:56:19 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:56:19.751606 | orchestrator | 2025-05-19 21:56:19 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:56:19.752075 | orchestrator | 2025-05-19 21:56:19 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:56:19.755559 | orchestrator | 
2025-05-19 21:56:19 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state STARTED 2025-05-19 21:56:19.755594 | orchestrator | 2025-05-19 21:56:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:56:22.790586 | orchestrator | 2025-05-19 21:56:22 | INFO  | Task f9c759e2-5c40-413c-a320-93cd251bd45b is in state STARTED 2025-05-19 21:56:22.790795 | orchestrator | 2025-05-19 21:56:22 | INFO  | Task f71ef190-961f-4802-84bc-380110316823 is in state STARTED 2025-05-19 21:56:22.791809 | orchestrator | 2025-05-19 21:56:22 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:56:22.792656 | orchestrator | 2025-05-19 21:56:22 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:56:22.793157 | orchestrator | 2025-05-19 21:56:22 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:56:22.793991 | orchestrator | 2025-05-19 21:56:22 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state STARTED 2025-05-19 21:56:22.794067 | orchestrator | 2025-05-19 21:56:22 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:56:25.828173 | orchestrator | 2025-05-19 21:56:25 | INFO  | Task f9c759e2-5c40-413c-a320-93cd251bd45b is in state STARTED 2025-05-19 21:56:25.828394 | orchestrator | 2025-05-19 21:56:25 | INFO  | Task f71ef190-961f-4802-84bc-380110316823 is in state STARTED 2025-05-19 21:56:25.828432 | orchestrator | 2025-05-19 21:56:25 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:56:25.828452 | orchestrator | 2025-05-19 21:56:25 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:56:25.828567 | orchestrator | 2025-05-19 21:56:25 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:56:25.829096 | orchestrator | 2025-05-19 21:56:25 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state STARTED 2025-05-19 21:56:25.830857 | orchestrator | 
2025-05-19 21:56:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:56:28.869129 | orchestrator | 2025-05-19 21:56:28 | INFO  | Task f9c759e2-5c40-413c-a320-93cd251bd45b is in state STARTED 2025-05-19 21:56:28.871461 | orchestrator | 2025-05-19 21:56:28 | INFO  | Task f71ef190-961f-4802-84bc-380110316823 is in state SUCCESS 2025-05-19 21:56:28.871984 | orchestrator | 2025-05-19 21:56:28 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:56:28.872734 | orchestrator | 2025-05-19 21:56:28 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:56:28.874875 | orchestrator | 2025-05-19 21:56:28 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:56:28.878243 | orchestrator | 2025-05-19 21:56:28 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state STARTED 2025-05-19 21:56:28.878324 | orchestrator | 2025-05-19 21:56:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:56:31.942381 | orchestrator | 2025-05-19 21:56:31 | INFO  | Task f9c759e2-5c40-413c-a320-93cd251bd45b is in state STARTED 2025-05-19 21:56:31.943903 | orchestrator | 2025-05-19 21:56:31 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:56:31.946870 | orchestrator | 2025-05-19 21:56:31 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:56:31.946970 | orchestrator | 2025-05-19 21:56:31 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:56:31.948611 | orchestrator | 2025-05-19 21:56:31 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:56:31.948637 | orchestrator | 2025-05-19 21:56:31 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state STARTED 2025-05-19 21:56:31.948649 | orchestrator | 2025-05-19 21:56:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:56:35.004263 | orchestrator | 2025-05-19 21:56:35 | INFO  | 
Task f9c759e2-5c40-413c-a320-93cd251bd45b is in state STARTED 2025-05-19 21:56:35.004495 | orchestrator | 2025-05-19 21:56:35 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:56:35.005166 | orchestrator | 2025-05-19 21:56:35 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:56:35.007958 | orchestrator | 2025-05-19 21:56:35 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:56:35.009718 | orchestrator | 2025-05-19 21:56:35 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:56:35.010579 | orchestrator | 2025-05-19 21:56:35 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state STARTED 2025-05-19 21:56:35.010695 | orchestrator | 2025-05-19 21:56:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:56:38.094591 | orchestrator | 2025-05-19 21:56:38 | INFO  | Task f9c759e2-5c40-413c-a320-93cd251bd45b is in state STARTED 2025-05-19 21:56:38.094839 | orchestrator | 2025-05-19 21:56:38 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:56:38.094920 | orchestrator | 2025-05-19 21:56:38 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:56:38.094991 | orchestrator | 2025-05-19 21:56:38 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:56:38.095064 | orchestrator | 2025-05-19 21:56:38 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED 2025-05-19 21:56:38.095161 | orchestrator | 2025-05-19 21:56:38 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state STARTED 2025-05-19 21:56:38.095231 | orchestrator | 2025-05-19 21:56:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:56:41.127036 | orchestrator | 2025-05-19 21:56:41 | INFO  | Task f9c759e2-5c40-413c-a320-93cd251bd45b is in state SUCCESS 2025-05-19 21:56:41.127876 | orchestrator | 2025-05-19 21:56:41.127901 | 
orchestrator | 2025-05-19 21:56:41.127922 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 21:56:41.127932 | orchestrator | 2025-05-19 21:56:41.127940 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 21:56:41.127949 | orchestrator | Monday 19 May 2025 21:56:16 +0000 (0:00:00.585) 0:00:00.585 ************ 2025-05-19 21:56:41.127957 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:56:41.127966 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:56:41.127974 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:56:41.127982 | orchestrator | 2025-05-19 21:56:41.127990 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 21:56:41.127998 | orchestrator | Monday 19 May 2025 21:56:17 +0000 (0:00:00.633) 0:00:01.218 ************ 2025-05-19 21:56:41.128007 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-19 21:56:41.128015 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-19 21:56:41.128023 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-19 21:56:41.128030 | orchestrator | 2025-05-19 21:56:41.128038 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-19 21:56:41.128046 | orchestrator | 2025-05-19 21:56:41.128054 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-19 21:56:41.128062 | orchestrator | Monday 19 May 2025 21:56:18 +0000 (0:00:00.670) 0:00:01.889 ************ 2025-05-19 21:56:41.128071 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:56:41.128079 | orchestrator | 2025-05-19 21:56:41.128087 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-19 21:56:41.128095 | 
orchestrator | Monday 19 May 2025 21:56:19 +0000 (0:00:01.044) 0:00:02.933 ************ 2025-05-19 21:56:41.128103 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-19 21:56:41.128111 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-19 21:56:41.128119 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-19 21:56:41.128127 | orchestrator | 2025-05-19 21:56:41.128135 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-19 21:56:41.128158 | orchestrator | Monday 19 May 2025 21:56:20 +0000 (0:00:00.777) 0:00:03.710 ************ 2025-05-19 21:56:41.128166 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-19 21:56:41.128198 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-19 21:56:41.128206 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-19 21:56:41.128214 | orchestrator | 2025-05-19 21:56:41.128222 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-19 21:56:41.128230 | orchestrator | Monday 19 May 2025 21:56:22 +0000 (0:00:02.752) 0:00:06.462 ************ 2025-05-19 21:56:41.128238 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:56:41.128246 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:56:41.128254 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:56:41.128262 | orchestrator | 2025-05-19 21:56:41.128270 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-19 21:56:41.128278 | orchestrator | Monday 19 May 2025 21:56:25 +0000 (0:00:02.770) 0:00:09.233 ************ 2025-05-19 21:56:41.128286 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:56:41.128294 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:56:41.128302 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:56:41.128310 | orchestrator | 2025-05-19 21:56:41.128318 | orchestrator | PLAY 
RECAP ********************************************************************* 2025-05-19 21:56:41.128326 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:56:41.128335 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:56:41.128343 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:56:41.128351 | orchestrator | 2025-05-19 21:56:41.128359 | orchestrator | 2025-05-19 21:56:41.128367 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:56:41.128375 | orchestrator | Monday 19 May 2025 21:56:28 +0000 (0:00:02.708) 0:00:11.942 ************ 2025-05-19 21:56:41.128383 | orchestrator | =============================================================================== 2025-05-19 21:56:41.128391 | orchestrator | memcached : Check memcached container ----------------------------------- 2.77s 2025-05-19 21:56:41.128399 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.75s 2025-05-19 21:56:41.128407 | orchestrator | memcached : Restart memcached container --------------------------------- 2.71s 2025-05-19 21:56:41.128415 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.04s 2025-05-19 21:56:41.128423 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.78s 2025-05-19 21:56:41.128431 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s 2025-05-19 21:56:41.128438 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.63s 2025-05-19 21:56:41.128446 | orchestrator | 2025-05-19 21:56:41.128454 | orchestrator | 2025-05-19 21:56:41.128462 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-05-19 21:56:41.128470 | orchestrator | 2025-05-19 21:56:41.128478 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 21:56:41.128486 | orchestrator | Monday 19 May 2025 21:56:15 +0000 (0:00:00.512) 0:00:00.512 ************ 2025-05-19 21:56:41.128495 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:56:41.128504 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:56:41.128513 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:56:41.128522 | orchestrator | 2025-05-19 21:56:41.128531 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 21:56:41.128549 | orchestrator | Monday 19 May 2025 21:56:16 +0000 (0:00:00.485) 0:00:00.998 ************ 2025-05-19 21:56:41.128563 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-19 21:56:41.128573 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-19 21:56:41.128614 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-19 21:56:41.128626 | orchestrator | 2025-05-19 21:56:41.128635 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-05-19 21:56:41.128644 | orchestrator | 2025-05-19 21:56:41.128653 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-19 21:56:41.128661 | orchestrator | Monday 19 May 2025 21:56:17 +0000 (0:00:01.257) 0:00:02.256 ************ 2025-05-19 21:56:41.128670 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:56:41.128679 | orchestrator | 2025-05-19 21:56:41.128688 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-19 21:56:41.128697 | orchestrator | Monday 19 May 2025 21:56:18 +0000 (0:00:00.934) 0:00:03.190 ************ 2025-05-19 21:56:41.128708 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128787 | orchestrator | 2025-05-19 
21:56:41.128796 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-19 21:56:41.128805 | orchestrator | Monday 19 May 2025 21:56:19 +0000 (0:00:01.420) 0:00:04.611 ************ 2025-05-19 21:56:41.128815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 
21:56:41.128844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128884 | orchestrator | 2025-05-19 21:56:41.128892 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-19 21:56:41.128900 | orchestrator | Monday 19 May 2025 21:56:22 +0000 (0:00:03.220) 0:00:07.832 ************ 2025-05-19 21:56:41.128908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-19 21:56:41.128925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-19 21:56:41.128934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-19 21:56:41.128942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-19 21:56:41.128954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-19 21:56:41.128963 | orchestrator |
2025-05-19 21:56:41.128975 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-05-19 21:56:41.128988 | orchestrator | Monday 19 May 2025 21:56:26 +0000 (0:00:03.752) 0:00:11.584 ************
2025-05-19 21:56:41.128996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-19 21:56:41.129005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-19 21:56:41.129013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-05-19 21:56:41.129021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-19 21:56:41.129030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-19 21:56:41.129044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-05-19 21:56:41.129052 | orchestrator |
2025-05-19 21:56:41.129060 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-19 21:56:41.129068 | orchestrator | Monday 19 May 2025 21:56:28 +0000 (0:00:02.260) 0:00:13.844 ************
2025-05-19 21:56:41.129076 | orchestrator |
2025-05-19 21:56:41.129084 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-19 21:56:41.129099 | orchestrator | Monday 19 May 2025 21:56:29 +0000 (0:00:00.414) 0:00:14.259 ************
2025-05-19 21:56:41.129107 | orchestrator |
2025-05-19 21:56:41.129115 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-19 21:56:41.129123 | orchestrator | Monday 19 May 2025 21:56:29 +0000 (0:00:00.251) 0:00:14.510 ************
2025-05-19 21:56:41.129131 | orchestrator |
2025-05-19 21:56:41.129139 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-05-19 21:56:41.129147 | orchestrator | Monday 19 May 2025 21:56:29 +0000 (0:00:00.230) 0:00:14.740 ************
2025-05-19 21:56:41.129155 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:56:41.129163 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:56:41.129184 | orchestrator | changed: [testbed-node-2]
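Each container item above carries a Kolla-style `healthcheck` dict (bare-second durations such as `'30'`, plus a `['CMD-SHELL', ...]` test and a retry count). As a minimal sketch of how such a dict could map onto Docker's `--health-*` CLI flags, the helper below is hypothetical (not part of Kolla or kolla-ansible), and the assumption that bare seconds translate to Docker's `30s`-style durations is mine:

```python
def healthcheck_to_docker_args(hc):
    """Translate a Kolla-style healthcheck dict into `docker run` flags.

    Hypothetical helper for illustration only. Kolla expresses durations as
    bare second counts (e.g. '30'); the Docker CLI wants a unit suffix, so
    an 's' is appended here (an assumption, not confirmed by the log).
    """
    args = []
    test = hc.get("test", [])
    if test and test[0] == "CMD-SHELL":
        # CMD-SHELL means: run the rest of the list as one shell command.
        args += ["--health-cmd", " ".join(test[1:])]
    for key, flag in [
        ("interval", "--health-interval"),
        ("timeout", "--health-timeout"),
        ("start_period", "--health-start-period"),
    ]:
        if key in hc:
            args += [flag, f"{hc[key]}s"]
    if "retries" in hc:
        args += ["--health-retries", str(hc["retries"])]
    return args

# The redis healthcheck exactly as it appears in the log items above:
redis_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(redis_hc))
```

In the real deployment these dicts are consumed by kolla-ansible's container modules rather than the Docker CLI; the flag mapping is only meant to show what each key controls.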
2025-05-19 21:56:41.129193 | orchestrator |
2025-05-19 21:56:41.129201 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-19 21:56:41.129209 | orchestrator | Monday 19 May 2025 21:56:35 +0000 (0:00:05.634) 0:00:20.375 ************
2025-05-19 21:56:41.129216 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:56:41.129225 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:56:41.129232 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:56:41.129240 | orchestrator |
2025-05-19 21:56:41.129248 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:56:41.129256 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 21:56:41.129265 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 21:56:41.129273 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 21:56:41.129281 | orchestrator |
2025-05-19 21:56:41.129289 | orchestrator |
2025-05-19 21:56:41.129297 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:56:41.129304 | orchestrator | Monday 19 May 2025 21:56:40 +0000 (0:00:04.911) 0:00:25.287 ************
2025-05-19 21:56:41.129312 | orchestrator | ===============================================================================
2025-05-19 21:56:41.129320 | orchestrator | redis : Restart redis container ----------------------------------------- 5.63s
2025-05-19 21:56:41.129328 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.91s
2025-05-19 21:56:41.129336 | orchestrator | redis : Copying over redis config files --------------------------------- 3.75s
2025-05-19 21:56:41.129344 | orchestrator | redis : Copying over default config.json files -------------------------- 3.22s
2025-05-19 21:56:41.129352 | orchestrator | redis : Check redis containers ------------------------------------------ 2.26s
2025-05-19 21:56:41.129365 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.42s
2025-05-19 21:56:41.129373 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.26s
2025-05-19 21:56:41.129381 | orchestrator | redis : include_tasks --------------------------------------------------- 0.93s
2025-05-19 21:56:41.129389 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.90s
2025-05-19 21:56:41.129397 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s
2025-05-19 21:56:41.129405 | orchestrator | 2025-05-19 21:56:41 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:56:41.129465 | orchestrator | 2025-05-19 21:56:41 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:56:41.129678 | orchestrator | 2025-05-19 21:56:41 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:56:41.130418 | orchestrator | 2025-05-19 21:56:41 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:56:41.131359 | orchestrator | 2025-05-19 21:56:41 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state STARTED
2025-05-19 21:56:41.131506 | orchestrator | 2025-05-19 21:56:41 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:57:30.051948 | orchestrator | 2025-05-19 21:57:30 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:57:30.053895 | orchestrator | 2025-05-19 21:57:30 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:57:30.055235 | orchestrator | 2025-05-19 21:57:30 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:57:30.056779 | orchestrator | 2025-05-19 21:57:30 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:57:30.058821 | orchestrator |
2025-05-19 21:57:30.058873 | orchestrator | 2025-05-19 21:57:30 | INFO  | Task 800b5a96-ada9-4ec1-a6ea-882f549edba8 is in state SUCCESS
2025-05-19 21:57:30.060545 | orchestrator |
2025-05-19 21:57:30.060574 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 21:57:30.060586 | orchestrator |
2025-05-19 21:57:30.060597 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 21:57:30.060609 | orchestrator | Monday 19 May 2025 21:56:16 +0000 (0:00:00.485) 0:00:00.485 ************
2025-05-19 21:57:30.060621 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:57:30.060633 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:57:30.060643 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:57:30.060654 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:57:30.060665 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:57:30.060677 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:57:30.060688 | orchestrator |
2025-05-19 21:57:30.060699 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 21:57:30.060711 | orchestrator | Monday 19 May 2025 21:56:17 +0000 (0:00:01.458) 0:00:01.944 ************
2025-05-19 21:57:30.060722 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-19 21:57:30.060763 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-19 21:57:30.060775 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-19 21:57:30.060786 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-19 21:57:30.060797 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-19 21:57:30.060807
| orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-05-19 21:57:30.060818 | orchestrator |
2025-05-19 21:57:30.060829 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-05-19 21:57:30.060840 | orchestrator |
2025-05-19 21:57:30.060850 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-05-19 21:57:30.060861 | orchestrator | Monday 19 May 2025 21:56:18 +0000 (0:00:01.138) 0:00:03.083 ************
2025-05-19 21:57:30.060873 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 21:57:30.060886 | orchestrator |
2025-05-19 21:57:30.060896 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-19 21:57:30.060907 | orchestrator | Monday 19 May 2025 21:56:20 +0000 (0:00:01.424) 0:00:04.508 ************
2025-05-19 21:57:30.060918 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-19 21:57:30.060929 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-19 21:57:30.060940 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-19 21:57:30.060950 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-19 21:57:30.060961 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-19 21:57:30.060972 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-19 21:57:30.060983 | orchestrator |
2025-05-19 21:57:30.060993 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-19 21:57:30.061004 | orchestrator | Monday 19 May 2025 21:56:21 +0000 (0:00:01.861) 0:00:06.369 ************
2025-05-19 21:57:30.061015 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-05-19 21:57:30.061026 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-05-19 21:57:30.061051 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-05-19 21:57:30.061063 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-05-19 21:57:30.061073 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-05-19 21:57:30.061084 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-05-19 21:57:30.061095 | orchestrator |
2025-05-19 21:57:30.061105 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-19 21:57:30.061116 | orchestrator | Monday 19 May 2025 21:56:23 +0000 (0:00:02.023) 0:00:08.393 ************
2025-05-19 21:57:30.061127 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-05-19 21:57:30.061138 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:57:30.061198 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-05-19 21:57:30.061211 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:57:30.061223 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-05-19 21:57:30.061236 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:57:30.061248 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-05-19 21:57:30.061260 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:57:30.061272 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-05-19 21:57:30.061284 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:57:30.061296 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-05-19 21:57:30.061309 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:57:30.061320 | orchestrator |
2025-05-19 21:57:30.061330 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-05-19 21:57:30.061341 | orchestrator | Monday 19 May 2025 21:56:25 +0000 (0:00:01.998) 0:00:10.391 ************
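The "Persist modules via modules-load.d" task above makes the `openvswitch` kernel module survive reboots by writing it into a systemd `modules-load.d` drop-in, where `systemd-modules-load` reads one module name per line at boot. A minimal sketch of the file content that step produces (the helper name and the exact drop-in filename are my assumptions, not taken from the role):

```python
def modules_load_content(modules):
    """Render a modules-load.d style file body: one module name per line.

    Hypothetical helper for illustration; the real module-load role uses an
    Ansible template/copy task to write e.g. /etc/modules-load.d/openvswitch.conf.
    """
    return "".join(f"{m}\n" for m in modules)

# For the task in the log, the rendered file body would simply be "openvswitch\n":
content = modules_load_content(["openvswitch"])
print(content, end="")
```

The companion "Load modules" task covers the running system (equivalent to `modprobe openvswitch`), while the persisted file covers subsequent boots; the "Drop module persistence" task is the inverse and is skipped here.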
2025-05-19 21:57:30.061362 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:57:30.061373 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:57:30.061383 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:57:30.061394 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:57:30.061405 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:57:30.061415 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:57:30.061426 | orchestrator |
2025-05-19 21:57:30.061437 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-05-19 21:57:30.061447 | orchestrator | Monday 19 May 2025 21:56:27 +0000 (0:00:01.516) 0:00:11.908 ************
2025-05-19 21:57:30.061478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 21:57:30.061495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 21:57:30.061508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 21:57:30.061526 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 21:57:30.061537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 21:57:30.061555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 21:57:30.061575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 21:57:30.061587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 21:57:30.061598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 21:57:30.061614 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 21:57:30.061626 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 21:57:30.061651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-05-19 21:57:30.061663 | orchestrator |
2025-05-19 21:57:30.061674 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-05-19 21:57:30.061685 | orchestrator | Monday 19 May 2025 21:56:30 +0000 (0:00:02.739) 0:00:14.648 ************
2025-05-19 21:57:30.061697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 21:57:30.061709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-05-19 21:57:30.061720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 21:57:30.061736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 21:57:30.061754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.061783 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 21:57:30.061796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.061807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.061823 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.061835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 21:57:30.061853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.061873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.061884 | orchestrator | 2025-05-19 21:57:30.061895 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-19 21:57:30.061906 | orchestrator | Monday 19 May 2025 21:56:35 +0000 (0:00:05.336) 0:00:19.984 ************ 2025-05-19 21:57:30.061917 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:57:30.061928 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:57:30.061939 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:57:30.061950 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:57:30.061961 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:57:30.061971 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:57:30.061982 | orchestrator | 2025-05-19 21:57:30.061993 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-19 21:57:30.062004 | orchestrator | Monday 19 May 2025 21:56:38 +0000 (0:00:02.660) 0:00:22.645 ************ 2025-05-19 21:57:30.062015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2025-05-19 21:57:30.062126 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062291 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-19 21:57:30.062302 | orchestrator | 2025-05-19 21:57:30.062313 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 21:57:30.062324 | orchestrator | Monday 19 May 2025 21:56:41 +0000 (0:00:02.997) 0:00:25.642 ************ 2025-05-19 21:57:30.062335 | orchestrator | 2025-05-19 21:57:30.062346 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 21:57:30.062357 | orchestrator | Monday 19 May 2025 21:56:41 +0000 (0:00:00.153) 0:00:25.796 ************ 2025-05-19 21:57:30.062374 | orchestrator | 2025-05-19 21:57:30.062385 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 21:57:30.062396 | orchestrator | Monday 19 May 2025 21:56:41 +0000 (0:00:00.134) 0:00:25.930 ************ 2025-05-19 21:57:30.062406 | orchestrator | 2025-05-19 21:57:30.062417 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 21:57:30.062428 | orchestrator | Monday 19 May 2025 21:56:41 +0000 (0:00:00.149) 0:00:26.079 ************ 2025-05-19 21:57:30.062438 | orchestrator | 2025-05-19 21:57:30.062449 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 21:57:30.062460 | orchestrator | 
Monday 19 May 2025 21:56:41 +0000 (0:00:00.162) 0:00:26.242 ************ 2025-05-19 21:57:30.062471 | orchestrator | 2025-05-19 21:57:30.062481 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-19 21:57:30.062492 | orchestrator | Monday 19 May 2025 21:56:42 +0000 (0:00:00.272) 0:00:26.514 ************ 2025-05-19 21:57:30.062503 | orchestrator | 2025-05-19 21:57:30.062514 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-19 21:57:30.062524 | orchestrator | Monday 19 May 2025 21:56:42 +0000 (0:00:00.387) 0:00:26.902 ************ 2025-05-19 21:57:30.062540 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:57:30.062551 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:57:30.062562 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:57:30.062573 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:57:30.062584 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:57:30.062595 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:57:30.062605 | orchestrator | 2025-05-19 21:57:30.062616 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-19 21:57:30.062627 | orchestrator | Monday 19 May 2025 21:56:53 +0000 (0:00:11.244) 0:00:38.147 ************ 2025-05-19 21:57:30.062638 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:57:30.062649 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:57:30.062659 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:57:30.062670 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:57:30.062681 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:57:30.062691 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:57:30.062702 | orchestrator | 2025-05-19 21:57:30.062713 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-19 21:57:30.062723 | orchestrator | Monday 19 May 2025 21:56:55 +0000 
(0:00:02.100) 0:00:40.247 ************ 2025-05-19 21:57:30.062734 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:57:30.062745 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:57:30.062756 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:57:30.062766 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:57:30.062777 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:57:30.062788 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:57:30.062798 | orchestrator | 2025-05-19 21:57:30.062809 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-19 21:57:30.062820 | orchestrator | Monday 19 May 2025 21:57:05 +0000 (0:00:09.889) 0:00:50.136 ************ 2025-05-19 21:57:30.062831 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-19 21:57:30.062842 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-19 21:57:30.062853 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-19 21:57:30.062864 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-19 21:57:30.062875 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-19 21:57:30.062891 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-19 21:57:30.062909 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-19 21:57:30.062920 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-19 21:57:30.062932 | 
orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-19 21:57:30.062942 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-19 21:57:30.062953 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-19 21:57:30.062964 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-19 21:57:30.062975 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 21:57:30.062986 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 21:57:30.062996 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 21:57:30.063007 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 21:57:30.063018 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 21:57:30.063028 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-19 21:57:30.063039 | orchestrator | 2025-05-19 21:57:30.063050 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-19 21:57:30.063061 | orchestrator | Monday 19 May 2025 21:57:14 +0000 (0:00:08.497) 0:00:58.634 ************ 2025-05-19 21:57:30.063072 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-19 21:57:30.063083 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:57:30.063093 | orchestrator | 
skipping: [testbed-node-4] => (item=br-ex)  2025-05-19 21:57:30.063105 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:57:30.063115 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-19 21:57:30.063126 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:57:30.063137 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-19 21:57:30.063175 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-19 21:57:30.063186 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-19 21:57:30.063197 | orchestrator | 2025-05-19 21:57:30.063207 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-19 21:57:30.063218 | orchestrator | Monday 19 May 2025 21:57:16 +0000 (0:00:02.400) 0:01:01.034 ************ 2025-05-19 21:57:30.063234 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-19 21:57:30.063245 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:57:30.063256 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-19 21:57:30.063267 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:57:30.063278 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-19 21:57:30.063288 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:57:30.063299 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-19 21:57:30.063310 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-19 21:57:30.063321 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-19 21:57:30.063331 | orchestrator | 2025-05-19 21:57:30.063342 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-19 21:57:30.063353 | orchestrator | Monday 19 May 2025 21:57:20 +0000 (0:00:03.786) 0:01:04.821 ************ 2025-05-19 21:57:30.063363 | orchestrator | changed: [testbed-node-0] 
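The "Set system-id, hostname and hw-offload" and "Ensuring OVS bridge/ports are properly setup" tasks above drive standard `ovs-vsctl` operations on each node. As a rough, illustrative sketch (these helper functions are hypothetical, not part of kolla-ansible; only the `ovs-vsctl` command syntax is standard), the equivalent commands can be rendered like this:

```python
# Illustrative only: build the ovs-vsctl command strings that roughly
# correspond to the kolla-ansible openvswitch tasks in the log above.
# Nothing is executed here; the strings are just printed.

def set_external_id(name: str, value: str) -> str:
    # "Set system-id, hostname" task: writes keys into the
    # Open_vSwitch table's external_ids column.
    return f"ovs-vsctl set Open_vSwitch . external_ids:{name}={value}"

def ensure_bridge(bridge: str) -> str:
    # "Ensuring OVS bridge is properly setup" task; --may-exist
    # makes the command idempotent, matching Ansible semantics.
    return f"ovs-vsctl --may-exist add-br {bridge}"

def ensure_port(bridge: str, port: str) -> str:
    # "Ensuring OVS ports are properly setup" task, e.g. vxlan0 on br-ex.
    return f"ovs-vsctl --may-exist add-port {bridge} {port}"

print(set_external_id("system-id", "testbed-node-0"))
print(ensure_bridge("br-ex"))
print(ensure_port("br-ex", "vxlan0"))
```

Note that in the run above only testbed-node-0/1/2 create br-ex and the vxlan0 port; nodes 3-5 skip those items.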
2025-05-19 21:57:30.063374 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:57:30.063392 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:57:30.063403 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:57:30.063414 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:57:30.063424 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:57:30.063435 | orchestrator |
2025-05-19 21:57:30.063446 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:57:30.063457 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 21:57:30.063468 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 21:57:30.063479 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-19 21:57:30.063490 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 21:57:30.063501 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 21:57:30.063518 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 21:57:30.063529 | orchestrator |
2025-05-19 21:57:30.063541 | orchestrator |
2025-05-19 21:57:30.063552 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:57:30.063563 | orchestrator | Monday 19 May 2025 21:57:28 +0000 (0:00:08.328) 0:01:13.149 ************
2025-05-19 21:57:30.063573 | orchestrator | ===============================================================================
2025-05-19 21:57:30.063584 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.22s
2025-05-19 21:57:30.063595 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.24s
2025-05-19 21:57:30.063605 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.50s
2025-05-19 21:57:30.063616 | orchestrator | openvswitch : Copying over config.json files for services --------------- 5.34s
2025-05-19 21:57:30.063627 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.79s
2025-05-19 21:57:30.063638 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.00s
2025-05-19 21:57:30.063649 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.74s
2025-05-19 21:57:30.063659 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.66s
2025-05-19 21:57:30.063670 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.40s
2025-05-19 21:57:30.063681 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.10s
2025-05-19 21:57:30.063691 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.02s
2025-05-19 21:57:30.063702 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.00s
2025-05-19 21:57:30.063713 | orchestrator | module-load : Load modules ---------------------------------------------- 1.86s
2025-05-19 21:57:30.063723 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.52s
2025-05-19 21:57:30.063734 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.46s
2025-05-19 21:57:30.063745 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.42s
2025-05-19 21:57:30.063755 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.26s
2025-05-19 21:57:30.063766 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s
2025-05-19 21:57:30.063777 | orchestrator | 2025-05-19 21:57:30 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:57:33.110666 | orchestrator | 2025-05-19 21:57:33 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:57:33.111865 | orchestrator | 2025-05-19 21:57:33 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:57:33.115219 | orchestrator | 2025-05-19 21:57:33 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:57:33.115279 | orchestrator | 2025-05-19 21:57:33 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:57:33.121112 | orchestrator | 2025-05-19 21:57:33 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:57:33.121205 | orchestrator | 2025-05-19 21:57:33 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:57:36.159396 | orchestrator | 2025-05-19 21:57:36 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:57:36.166288 | orchestrator | 2025-05-19 21:57:36 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:57:36.166809 | orchestrator | 2025-05-19 21:57:36 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:57:36.169023 | orchestrator | 2025-05-19 21:57:36 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:57:36.169062 | orchestrator | 2025-05-19 21:57:36 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:57:36.169114 | orchestrator | 2025-05-19 21:57:36 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:57:39.217418 | orchestrator | 2025-05-19 21:57:39 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:57:39.217517 | orchestrator | 2025-05-19 21:57:39 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:57:39.217533 | orchestrator | 2025-05-19 21:57:39 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:57:39.217545 | orchestrator | 2025-05-19 21:57:39 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:57:39.217557 | orchestrator | 2025-05-19 21:57:39 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:57:39.217568 | orchestrator | 2025-05-19 21:57:39 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:57:42.258952 | orchestrator | 2025-05-19 21:57:42 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:57:42.259080 | orchestrator | 2025-05-19 21:57:42 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:57:42.263724 | orchestrator | 2025-05-19 21:57:42 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:57:42.263788 | orchestrator | 2025-05-19 21:57:42 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:57:42.263808 | orchestrator | 2025-05-19 21:57:42 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:57:42.263820 | orchestrator | 2025-05-19 21:57:42 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:57:45.301255 | orchestrator | 2025-05-19 21:57:45 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:57:45.301677 | orchestrator | 2025-05-19 21:57:45 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:57:45.302363 | orchestrator | 2025-05-19 21:57:45 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:57:45.303119 | orchestrator | 2025-05-19 21:57:45 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:57:45.313175 | orchestrator | 2025-05-19 21:57:45 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:57:45.313269 | orchestrator | 2025-05-19 21:57:45 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:57:48.357066 | orchestrator | 2025-05-19 21:57:48 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:57:48.357268 | orchestrator | 2025-05-19 21:57:48 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:57:48.358510 | orchestrator | 2025-05-19 21:57:48 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:57:48.359328 | orchestrator | 2025-05-19 21:57:48 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:57:48.361773 | orchestrator | 2025-05-19 21:57:48 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:57:48.361808 | orchestrator | 2025-05-19 21:57:48 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:57:51.432559 | orchestrator | 2025-05-19 21:57:51 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:57:51.435187 | orchestrator | 2025-05-19 21:57:51 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:57:51.436484 | orchestrator | 2025-05-19 21:57:51 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:57:51.437497 | orchestrator | 2025-05-19 21:57:51 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:57:51.438788 | orchestrator | 2025-05-19 21:57:51 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:57:51.438833 | orchestrator | 2025-05-19 21:57:51 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:57:54.514276 | orchestrator | 2025-05-19 21:57:54 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:57:54.514538 | orchestrator | 2025-05-19 21:57:54 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:57:54.517655 | orchestrator | 2025-05-19 21:57:54 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:57:54.518700 | orchestrator | 2025-05-19 21:57:54 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:57:54.519868 | orchestrator | 2025-05-19 21:57:54 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:57:54.521103 | orchestrator | 2025-05-19 21:57:54 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:57:57.575369 | orchestrator | 2025-05-19 21:57:57 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:57:57.576391 | orchestrator | 2025-05-19 21:57:57 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:57:57.577884 | orchestrator | 2025-05-19 21:57:57 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:57:57.584417 | orchestrator | 2025-05-19 21:57:57 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:57:57.585405 | orchestrator | 2025-05-19 21:57:57 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:57:57.585580 | orchestrator | 2025-05-19 21:57:57 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:58:00.654228 | orchestrator | 2025-05-19 21:58:00 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:58:00.656022 | orchestrator | 2025-05-19 21:58:00 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:58:00.659847 | orchestrator | 2025-05-19 21:58:00 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:58:00.663026 | orchestrator | 2025-05-19 21:58:00 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:58:00.663085 | orchestrator | 2025-05-19 21:58:00 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:58:00.663097 | orchestrator | 2025-05-19 21:58:00 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:58:03.713239 | orchestrator | 2025-05-19 21:58:03 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:58:03.718236 | orchestrator | 2025-05-19 21:58:03 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:58:03.718460 | orchestrator | 2025-05-19 21:58:03 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:58:03.721360 | orchestrator | 2025-05-19 21:58:03 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:58:03.723863 | orchestrator | 2025-05-19 21:58:03 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:58:03.723927 | orchestrator | 2025-05-19 21:58:03 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:58:06.774978 | orchestrator | 2025-05-19 21:58:06 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:58:06.776078 | orchestrator | 2025-05-19 21:58:06 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:58:06.776386 | orchestrator | 2025-05-19 21:58:06 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:58:06.777489 | orchestrator | 2025-05-19 21:58:06 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:58:06.779258 | orchestrator | 2025-05-19 21:58:06 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:58:06.779343 | orchestrator | 2025-05-19 21:58:06 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:58:09.827633 | orchestrator | 2025-05-19 21:58:09 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:58:09.830529 | orchestrator | 2025-05-19 21:58:09 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:58:09.831002 | orchestrator | 2025-05-19 21:58:09 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:58:09.831820 | orchestrator | 2025-05-19 21:58:09 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:58:09.832370 | orchestrator | 2025-05-19 21:58:09 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:58:09.832412 | orchestrator | 2025-05-19 21:58:09 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:58:12.875625 | orchestrator | 2025-05-19 21:58:12 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:58:12.877789 | orchestrator | 2025-05-19 21:58:12 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:58:12.880247 | orchestrator | 2025-05-19 21:58:12 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:58:12.881927 | orchestrator | 2025-05-19 21:58:12 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:58:12.883747 | orchestrator | 2025-05-19 21:58:12 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:58:12.883783 | orchestrator | 2025-05-19 21:58:12 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:58:15.921730 | orchestrator | 2025-05-19 21:58:15 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:58:15.924404 | orchestrator | 2025-05-19 21:58:15 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:58:15.924442 | orchestrator | 2025-05-19 21:58:15 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:58:15.925166 | orchestrator | 2025-05-19 21:58:15 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:58:15.926081 | orchestrator | 2025-05-19 21:58:15 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:58:15.926107 | orchestrator | 2025-05-19 21:58:15 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:58:18.977411 | orchestrator | 2025-05-19 21:58:18 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:58:18.980833 | orchestrator | 2025-05-19 21:58:18 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:58:18.982972 | orchestrator | 2025-05-19 21:58:18 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:58:18.986509 | orchestrator | 2025-05-19 21:58:18 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state STARTED
2025-05-19 21:58:18.989981 | orchestrator | 2025-05-19 21:58:18 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:58:18.990062 | orchestrator | 2025-05-19 21:58:18 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:58:22.026906 | orchestrator | 2025-05-19 21:58:22 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:58:22.029463 | orchestrator | 2025-05-19 21:58:22 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:58:22.030842 | orchestrator | 2025-05-19 21:58:22 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED
2025-05-19 21:58:22.033651 | orchestrator | 2025-05-19 21:58:22 | INFO  | Task 905fd585-86e8-4635-8138-619cc3c8097c is in state SUCCESS
2025-05-19 21:58:22.035039 | orchestrator |
2025-05-19 21:58:22.035079 | orchestrator |
2025-05-19 21:58:22.035091 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-05-19 21:58:22.035103 | orchestrator |
2025-05-19 21:58:22.035156 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-05-19 21:58:22.035169 | orchestrator | Monday 19 May 2025 21:53:37 +0000 (0:00:00.228) 0:00:00.228 ************
2025-05-19 21:58:22.035181 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:58:22.035192 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:58:22.035203 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:58:22.035214 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:58:22.035225 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:58:22.035328 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:58:22.035351 | orchestrator |
2025-05-19 21:58:22.035363 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-05-19 21:58:22.035374 | orchestrator | Monday 19 May 2025 21:53:38 +0000 (0:00:00.818) 0:00:01.047 ************
2025-05-19 21:58:22.035385 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.035397 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.035408 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.035419 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.035430 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.035441 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.035451 | orchestrator |
2025-05-19 21:58:22.035462 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-05-19 21:58:22.035473 | orchestrator | Monday 19 May 2025 21:53:39 +0000 (0:00:00.807) 0:00:01.854 ************
2025-05-19 21:58:22.035484 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.035495 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.035520 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.035550 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.035562 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.035573 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.035584 | orchestrator |
2025-05-19 21:58:22.035595 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-05-19 21:58:22.035606 | orchestrator | Monday 19 May 2025 21:53:40 +0000 (0:00:00.898) 0:00:02.752 ************
2025-05-19 21:58:22.035617 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:58:22.035629 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:58:22.035642 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:58:22.035655 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:22.035667 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:58:22.035680 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:58:22.035692 | orchestrator |
2025-05-19 21:58:22.035714 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-05-19 21:58:22.035734 | orchestrator | Monday 19 May 2025 21:53:42 +0000 (0:00:02.191) 0:00:04.943 ************
2025-05-19 21:58:22.035755 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:58:22.035776 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:58:22.035797 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:58:22.035819 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:22.035840 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:58:22.035853 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:58:22.035881 | orchestrator |
2025-05-19 21:58:22.035894 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-05-19 21:58:22.035906 | orchestrator | Monday 19 May 2025 21:53:43 +0000 (0:00:00.971) 0:00:05.915 ************
2025-05-19 21:58:22.035919 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:58:22.035937 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:58:22.035957 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:58:22.035979 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:22.036001 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:58:22.036056 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:58:22.036070 | orchestrator |
2025-05-19 21:58:22.036081 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-05-19 21:58:22.036092 | orchestrator | Monday 19 May 2025 21:53:44 +0000 (0:00:01.005) 0:00:06.920 ************
2025-05-19 21:58:22.036103 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.036141 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.036153 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.036179 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.036200 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.036219 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.036238 | orchestrator |
2025-05-19 21:58:22.036250 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-05-19 21:58:22.036261 | orchestrator | Monday 19 May 2025 21:53:45 +0000 (0:00:00.865) 0:00:07.786 ************
2025-05-19 21:58:22.036277 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.036294 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.036305 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.036316 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.036327 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.036338 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.036348 | orchestrator |
2025-05-19 21:58:22.036359 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-05-19 21:58:22.036371 | orchestrator | Monday 19 May 2025 21:53:46 +0000 (0:00:00.675) 0:00:08.462 ************
2025-05-19 21:58:22.036382 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 21:58:22.036394 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 21:58:22.036404 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.036415 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 21:58:22.036437 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 21:58:22.036448 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.036459 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 21:58:22.036470 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 21:58:22.036487 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.036507 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 21:58:22.036543 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 21:58:22.036563 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.036576 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 21:58:22.036587 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 21:58:22.036598 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.036609 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 21:58:22.036620 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 21:58:22.036630 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.036641 | orchestrator |
2025-05-19 21:58:22.036652 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-05-19 21:58:22.036663 | orchestrator | Monday 19 May 2025 21:53:46 +0000 (0:00:00.886) 0:00:09.349 ************
2025-05-19 21:58:22.036674 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.036684 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.036695 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.036706 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.036717 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.036727 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.036738 | orchestrator |
2025-05-19 21:58:22.036749 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-05-19 21:58:22.036761 | orchestrator | Monday 19 May 2025 21:53:48 +0000 (0:00:01.319) 0:00:10.669 ************
2025-05-19 21:58:22.036778 | orchestrator | ok: [testbed-node-3]
2025-05-19 21:58:22.036789 | orchestrator | ok: [testbed-node-4]
2025-05-19 21:58:22.036800 | orchestrator | ok: [testbed-node-5]
2025-05-19 21:58:22.036811 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:58:22.036856 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:58:22.036867 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:58:22.036890 | orchestrator |
2025-05-19 21:58:22.036902 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-05-19 21:58:22.036914 | orchestrator | Monday 19 May 2025 21:53:48 +0000 (0:00:00.553) 0:00:11.222 ************
2025-05-19 21:58:22.036947 | orchestrator | changed: [testbed-node-4]
2025-05-19 21:58:22.036973 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:22.036993 | orchestrator | changed: [testbed-node-3]
2025-05-19 21:58:22.037012 | orchestrator | changed: [testbed-node-5]
2025-05-19 21:58:22.037034 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:58:22.037055 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:58:22.037074 | orchestrator |
2025-05-19 21:58:22.037085 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-05-19 21:58:22.037096 | orchestrator | Monday 19 May 2025 21:53:54 +0000 (0:00:05.879) 0:00:17.101 ************
2025-05-19 21:58:22.037107 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.037151 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.037163 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.037173 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.037184 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.037195 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.037206 | orchestrator |
2025-05-19 21:58:22.037217 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-05-19 21:58:22.037237 | orchestrator | Monday 19 May 2025 21:53:55 +0000 (0:00:00.992) 0:00:18.094 ************
2025-05-19 21:58:22.037248 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.037259 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.037270 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.037281 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.037298 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.037313 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.037324 | orchestrator |
2025-05-19 21:58:22.037335 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-05-19 21:58:22.037362 | orchestrator | Monday 19 May 2025 21:53:57 +0000 (0:00:01.927) 0:00:20.022 ************
2025-05-19 21:58:22.037383 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.037404 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.037423 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.037435 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.037446 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.037457 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.037467 | orchestrator |
2025-05-19 21:58:22.037478 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-05-19 21:58:22.037489 | orchestrator | Monday 19 May 2025 21:53:58 +0000 (0:00:01.125) 0:00:21.147 ************
2025-05-19 21:58:22.037500 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2025-05-19 21:58:22.037511 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2025-05-19 21:58:22.037522 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.037532 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2025-05-19 21:58:22.037543 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2025-05-19 21:58:22.037554 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.037564 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2025-05-19 21:58:22.037575 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2025-05-19 21:58:22.037586 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.037597 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2025-05-19 21:58:22.037608 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2025-05-19 21:58:22.037618 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.037629 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2025-05-19 21:58:22.037640 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2025-05-19 21:58:22.037651 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.037661 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2025-05-19 21:58:22.037672 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2025-05-19 21:58:22.037683 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.037693 | orchestrator |
2025-05-19 21:58:22.037704 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-05-19 21:58:22.037724 | orchestrator | Monday 19 May 2025 21:54:00 +0000 (0:00:01.376) 0:00:22.524 ************
2025-05-19 21:58:22.037736 | orchestrator | skipping: [testbed-node-3]
2025-05-19 21:58:22.037747 | orchestrator | skipping: [testbed-node-4]
2025-05-19 21:58:22.037757 | orchestrator | skipping: [testbed-node-5]
2025-05-19 21:58:22.037768 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.037779 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.037790 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.037800 | orchestrator |
2025-05-19 21:58:22.037811 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-05-19 21:58:22.037822 | orchestrator |
2025-05-19 21:58:22.037833 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-05-19 21:58:22.037844 | orchestrator | Monday 19 May 2025 21:54:01 +0000 (0:00:01.684) 0:00:24.208 ************
2025-05-19 21:58:22.037855 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:58:22.037866 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:58:22.037884 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:58:22.037895 | orchestrator |
2025-05-19 21:58:22.037905 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-05-19 21:58:22.037916 | orchestrator | Monday 19 May 2025 21:54:03 +0000 (0:00:01.661) 0:00:25.870 ************
2025-05-19 21:58:22.037927 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:58:22.037938 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:58:22.037949 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:58:22.037960 | orchestrator |
2025-05-19 21:58:22.037971 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-05-19 21:58:22.037982 | orchestrator | Monday 19 May 2025 21:54:04 +0000 (0:00:01.425) 0:00:27.295 ************
2025-05-19 21:58:22.037992 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:58:22.038013 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:58:22.038100 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:58:22.038159 | orchestrator |
2025-05-19 21:58:22.038172 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-05-19 21:58:22.038183 | orchestrator | Monday 19 May 2025 21:54:06 +0000 (0:00:01.172) 0:00:28.468 ************
2025-05-19 21:58:22.038194 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:58:22.038205 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:58:22.038216 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:58:22.038226 | orchestrator |
2025-05-19 21:58:22.038237 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-05-19 21:58:22.038248 | orchestrator | Monday 19 May 2025 21:54:06 +0000 (0:00:00.794) 0:00:29.262 ************
2025-05-19 21:58:22.038259 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.038270 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.038281 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.038292 | orchestrator |
2025-05-19 21:58:22.038303 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-05-19 21:58:22.038313 | orchestrator | Monday 19 May 2025 21:54:07 +0000 (0:00:00.359) 0:00:29.622 ************
2025-05-19 21:58:22.038324 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:58:22.038335 | orchestrator |
2025-05-19 21:58:22.038347 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-05-19 21:58:22.038357 | orchestrator | Monday 19 May 2025 21:54:07 +0000 (0:00:00.493) 0:00:30.116 ************
2025-05-19 21:58:22.038382 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:58:22.038393 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:58:22.038404 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:58:22.038415 | orchestrator |
2025-05-19 21:58:22.038426 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-05-19 21:58:22.038437 | orchestrator | Monday 19 May 2025 21:54:10 +0000 (0:00:02.641) 0:00:32.757 ************
2025-05-19 21:58:22.038448 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.038459 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.038470 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:22.038480 | orchestrator |
2025-05-19 21:58:22.038491 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-05-19 21:58:22.038502 | orchestrator | Monday 19 May 2025 21:54:11 +0000 (0:00:00.856) 0:00:33.613 ************
2025-05-19 21:58:22.038513 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.038524 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.038535 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:22.038545 | orchestrator |
2025-05-19 21:58:22.038556 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-05-19 21:58:22.038567 | orchestrator | Monday 19 May 2025 21:54:12 +0000 (0:00:01.108) 0:00:34.722 ************
2025-05-19 21:58:22.038578 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.038589 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.038600 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:22.038611 | orchestrator |
2025-05-19 21:58:22.038622 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-05-19 21:58:22.038642 | orchestrator | Monday 19 May 2025 21:54:14 +0000 (0:00:02.128) 0:00:36.851 ************
2025-05-19 21:58:22.038653 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.038664 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.038675 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.038685 | orchestrator |
2025-05-19 21:58:22.038696 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-05-19 21:58:22.038707 | orchestrator | Monday 19 May 2025 21:54:15 +0000 (0:00:00.559) 0:00:37.410 ************
2025-05-19 21:58:22.038718 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:22.038729 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:22.038740 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:22.038751 | orchestrator |
2025-05-19 21:58:22.038762 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-05-19 21:58:22.038773 | orchestrator | Monday 19 May 2025 21:54:15 +0000 (0:00:00.538) 0:00:37.949 ************
2025-05-19 21:58:22.038783 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:58:22.038794 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:22.038805 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:58:22.038816 | orchestrator |
2025-05-19 21:58:22.038827 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-05-19 21:58:22.038838 | orchestrator | Monday 19 May 2025 21:54:17 +0000 (0:00:01.934) 0:00:39.883 ************
2025-05-19 21:58:22.038856 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-19 21:58:22.038868 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-19 21:58:22.038879 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-05-19 21:58:22.038890 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-19 21:58:22.038901 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-19 21:58:22.038912 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-05-19 21:58:22.038923 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-19 21:58:22.038939 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-19 21:58:22.038951 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-05-19 21:58:22.038962 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-19 21:58:22.038973 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-19 21:58:22.038983 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-05-19 21:58:22.038994 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-05-19 21:58:22.039005 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-05-19 21:58:22.039016 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-05-19 21:58:22.039034 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:22.039055 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:22.039076 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:58:22.039097 | orchestrator | 2025-05-19 21:58:22.039137 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-05-19 21:58:22.039157 | orchestrator | Monday 19 May 2025 21:55:13 +0000 (0:00:56.181) 0:01:36.064 ************ 2025-05-19 21:58:22.039171 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.039182 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:58:22.039192 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:58:22.039203 | orchestrator | 2025-05-19 21:58:22.039214 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-05-19 21:58:22.039224 | orchestrator | Monday 19 May 2025 21:55:13 +0000 (0:00:00.275) 0:01:36.340 ************ 2025-05-19 21:58:22.039235 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:58:22.039246 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:58:22.039257 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:58:22.039267 | orchestrator | 2025-05-19 21:58:22.039278 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-05-19 21:58:22.039289 | orchestrator | Monday 19 May 2025 21:55:14 +0000 (0:00:00.940) 0:01:37.280 ************ 2025-05-19 21:58:22.039299 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:58:22.039310 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:58:22.039321 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:58:22.039331 | orchestrator | 2025-05-19 21:58:22.039342 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-05-19 21:58:22.039353 | orchestrator | Monday 19 May 2025 21:55:16 +0000 (0:00:01.181) 0:01:38.462 ************ 2025-05-19 21:58:22.039364 
| orchestrator | changed: [testbed-node-1] 2025-05-19 21:58:22.039374 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:58:22.039385 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:58:22.039395 | orchestrator | 2025-05-19 21:58:22.039406 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-05-19 21:58:22.039417 | orchestrator | Monday 19 May 2025 21:55:31 +0000 (0:00:15.094) 0:01:53.556 ************ 2025-05-19 21:58:22.039428 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:22.039438 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:58:22.039449 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:22.039460 | orchestrator | 2025-05-19 21:58:22.039470 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-05-19 21:58:22.039481 | orchestrator | Monday 19 May 2025 21:55:32 +0000 (0:00:00.871) 0:01:54.428 ************ 2025-05-19 21:58:22.039492 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:22.039502 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:22.039513 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:58:22.039523 | orchestrator | 2025-05-19 21:58:22.039534 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-05-19 21:58:22.039545 | orchestrator | Monday 19 May 2025 21:55:32 +0000 (0:00:00.694) 0:01:55.122 ************ 2025-05-19 21:58:22.039556 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:58:22.039567 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:58:22.039577 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:58:22.039588 | orchestrator | 2025-05-19 21:58:22.039606 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-05-19 21:58:22.039618 | orchestrator | Monday 19 May 2025 21:55:33 +0000 (0:00:00.709) 0:01:55.832 ************ 2025-05-19 21:58:22.039628 | orchestrator | ok: [testbed-node-0] 
2025-05-19 21:58:22.039639 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:22.039650 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:58:22.039661 | orchestrator | 2025-05-19 21:58:22.039672 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-05-19 21:58:22.039682 | orchestrator | Monday 19 May 2025 21:55:34 +0000 (0:00:01.084) 0:01:56.916 ************ 2025-05-19 21:58:22.039693 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:22.039704 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:22.039723 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:58:22.039734 | orchestrator | 2025-05-19 21:58:22.039745 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-05-19 21:58:22.039756 | orchestrator | Monday 19 May 2025 21:55:34 +0000 (0:00:00.281) 0:01:57.197 ************ 2025-05-19 21:58:22.039767 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:58:22.039777 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:58:22.039788 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:58:22.039799 | orchestrator | 2025-05-19 21:58:22.039809 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-05-19 21:58:22.039820 | orchestrator | Monday 19 May 2025 21:55:35 +0000 (0:00:00.625) 0:01:57.823 ************ 2025-05-19 21:58:22.039831 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:58:22.039842 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:58:22.039852 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:58:22.039863 | orchestrator | 2025-05-19 21:58:22.039879 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-05-19 21:58:22.039890 | orchestrator | Monday 19 May 2025 21:55:36 +0000 (0:00:00.631) 0:01:58.455 ************ 2025-05-19 21:58:22.039901 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:58:22.039912 | 
orchestrator | changed: [testbed-node-1] 2025-05-19 21:58:22.039923 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:58:22.039934 | orchestrator | 2025-05-19 21:58:22.039945 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-05-19 21:58:22.039956 | orchestrator | Monday 19 May 2025 21:55:37 +0000 (0:00:01.111) 0:01:59.566 ************ 2025-05-19 21:58:22.039967 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:58:22.039978 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:58:22.039988 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:58:22.039999 | orchestrator | 2025-05-19 21:58:22.040010 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-05-19 21:58:22.040021 | orchestrator | Monday 19 May 2025 21:55:38 +0000 (0:00:00.860) 0:02:00.427 ************ 2025-05-19 21:58:22.040032 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.040043 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:58:22.040054 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:58:22.040064 | orchestrator | 2025-05-19 21:58:22.040076 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-05-19 21:58:22.040096 | orchestrator | Monday 19 May 2025 21:55:38 +0000 (0:00:00.265) 0:02:00.692 ************ 2025-05-19 21:58:22.040134 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.040156 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:58:22.040176 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:58:22.040194 | orchestrator | 2025-05-19 21:58:22.040208 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-05-19 21:58:22.040219 | orchestrator | Monday 19 May 2025 21:55:38 +0000 (0:00:00.320) 0:02:01.013 ************ 2025-05-19 21:58:22.040230 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:22.040241 | orchestrator | 
ok: [testbed-node-2] 2025-05-19 21:58:22.040251 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:22.040262 | orchestrator | 2025-05-19 21:58:22.040273 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-05-19 21:58:22.040284 | orchestrator | Monday 19 May 2025 21:55:39 +0000 (0:00:01.114) 0:02:02.128 ************ 2025-05-19 21:58:22.040294 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:22.040305 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:22.040316 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:58:22.040326 | orchestrator | 2025-05-19 21:58:22.040337 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-05-19 21:58:22.040349 | orchestrator | Monday 19 May 2025 21:55:40 +0000 (0:00:00.703) 0:02:02.831 ************ 2025-05-19 21:58:22.040359 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-19 21:58:22.040370 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-19 21:58:22.040389 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-05-19 21:58:22.040399 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-19 21:58:22.040410 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-19 21:58:22.040421 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-05-19 21:58:22.040432 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-19 21:58:22.040443 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-19 21:58:22.040454 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-05-19 21:58:22.040465 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-05-19 21:58:22.040475 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-19 21:58:22.040486 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-19 21:58:22.040504 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-05-19 21:58:22.040515 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-19 21:58:22.040526 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-19 21:58:22.040537 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-19 21:58:22.040547 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-19 21:58:22.040558 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-19 21:58:22.040569 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-19 21:58:22.040580 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-19 21:58:22.040591 | orchestrator | 2025-05-19 21:58:22.040602 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-05-19 21:58:22.040613 | orchestrator | 2025-05-19 21:58:22.040624 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-05-19 21:58:22.040635 | orchestrator | Monday 19 May 2025 21:55:43 +0000 (0:00:02.934) 
0:02:05.766 ************ 2025-05-19 21:58:22.040645 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:58:22.040662 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:58:22.040673 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:58:22.040684 | orchestrator | 2025-05-19 21:58:22.040695 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-05-19 21:58:22.040706 | orchestrator | Monday 19 May 2025 21:55:43 +0000 (0:00:00.498) 0:02:06.265 ************ 2025-05-19 21:58:22.040717 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:58:22.040728 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:58:22.040739 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:58:22.040750 | orchestrator | 2025-05-19 21:58:22.040761 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-05-19 21:58:22.040772 | orchestrator | Monday 19 May 2025 21:55:44 +0000 (0:00:00.558) 0:02:06.823 ************ 2025-05-19 21:58:22.040783 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:58:22.040794 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:58:22.040805 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:58:22.040816 | orchestrator | 2025-05-19 21:58:22.040827 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-05-19 21:58:22.040838 | orchestrator | Monday 19 May 2025 21:55:44 +0000 (0:00:00.290) 0:02:07.114 ************ 2025-05-19 21:58:22.040860 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 21:58:22.040871 | orchestrator | 2025-05-19 21:58:22.040882 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-05-19 21:58:22.040893 | orchestrator | Monday 19 May 2025 21:55:45 +0000 (0:00:00.633) 0:02:07.748 ************ 2025-05-19 21:58:22.040904 | orchestrator | skipping: [testbed-node-3] 2025-05-19 
21:58:22.040915 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:58:22.040926 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:58:22.040936 | orchestrator | 2025-05-19 21:58:22.040947 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-05-19 21:58:22.040958 | orchestrator | Monday 19 May 2025 21:55:45 +0000 (0:00:00.289) 0:02:08.037 ************ 2025-05-19 21:58:22.040969 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:58:22.040980 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:58:22.040991 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:58:22.041002 | orchestrator | 2025-05-19 21:58:22.041013 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-05-19 21:58:22.041023 | orchestrator | Monday 19 May 2025 21:55:45 +0000 (0:00:00.294) 0:02:08.332 ************ 2025-05-19 21:58:22.041034 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:58:22.041045 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:58:22.041056 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:58:22.041067 | orchestrator | 2025-05-19 21:58:22.041078 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-05-19 21:58:22.041089 | orchestrator | Monday 19 May 2025 21:55:46 +0000 (0:00:00.273) 0:02:08.605 ************ 2025-05-19 21:58:22.041099 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:58:22.041171 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:58:22.041194 | orchestrator | changed: [testbed-node-5] 2025-05-19 21:58:22.041206 | orchestrator | 2025-05-19 21:58:22.041217 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-05-19 21:58:22.041228 | orchestrator | Monday 19 May 2025 21:55:47 +0000 (0:00:01.366) 0:02:09.972 ************ 2025-05-19 21:58:22.041239 | orchestrator | changed: [testbed-node-5] 2025-05-19 
21:58:22.041250 | orchestrator | changed: [testbed-node-4] 2025-05-19 21:58:22.041260 | orchestrator | changed: [testbed-node-3] 2025-05-19 21:58:22.041271 | orchestrator | 2025-05-19 21:58:22.041282 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-05-19 21:58:22.041292 | orchestrator | 2025-05-19 21:58:22.041303 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-05-19 21:58:22.041314 | orchestrator | Monday 19 May 2025 21:55:56 +0000 (0:00:08.503) 0:02:18.475 ************ 2025-05-19 21:58:22.041324 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:22.041335 | orchestrator | 2025-05-19 21:58:22.041346 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-05-19 21:58:22.041357 | orchestrator | Monday 19 May 2025 21:55:56 +0000 (0:00:00.693) 0:02:19.169 ************ 2025-05-19 21:58:22.041367 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:22.041378 | orchestrator | 2025-05-19 21:58:22.041388 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-19 21:58:22.041398 | orchestrator | Monday 19 May 2025 21:55:57 +0000 (0:00:00.395) 0:02:19.565 ************ 2025-05-19 21:58:22.041408 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-19 21:58:22.041417 | orchestrator | 2025-05-19 21:58:22.041433 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-19 21:58:22.041443 | orchestrator | Monday 19 May 2025 21:55:58 +0000 (0:00:00.913) 0:02:20.479 ************ 2025-05-19 21:58:22.041453 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:22.041462 | orchestrator | 2025-05-19 21:58:22.041472 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-05-19 21:58:22.041482 | orchestrator | Monday 19 May 2025 21:55:59 +0000 
(0:00:00.904) 0:02:21.384 ************ 2025-05-19 21:58:22.041498 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:22.041508 | orchestrator | 2025-05-19 21:58:22.041518 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-05-19 21:58:22.041527 | orchestrator | Monday 19 May 2025 21:55:59 +0000 (0:00:00.651) 0:02:22.036 ************ 2025-05-19 21:58:22.041537 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-19 21:58:22.041547 | orchestrator | 2025-05-19 21:58:22.041556 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-05-19 21:58:22.041566 | orchestrator | Monday 19 May 2025 21:56:01 +0000 (0:00:01.606) 0:02:23.642 ************ 2025-05-19 21:58:22.041575 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-19 21:58:22.041585 | orchestrator | 2025-05-19 21:58:22.041595 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-05-19 21:58:22.041604 | orchestrator | Monday 19 May 2025 21:56:02 +0000 (0:00:00.939) 0:02:24.581 ************ 2025-05-19 21:58:22.041614 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:22.041624 | orchestrator | 2025-05-19 21:58:22.041633 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-05-19 21:58:22.041647 | orchestrator | Monday 19 May 2025 21:56:02 +0000 (0:00:00.450) 0:02:25.032 ************ 2025-05-19 21:58:22.041657 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:22.041667 | orchestrator | 2025-05-19 21:58:22.041676 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-05-19 21:58:22.041686 | orchestrator | 2025-05-19 21:58:22.041696 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-05-19 21:58:22.041705 | orchestrator | Monday 19 May 2025 21:56:03 +0000 (0:00:00.458) 
0:02:25.491 ************ 2025-05-19 21:58:22.041715 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:22.041725 | orchestrator | 2025-05-19 21:58:22.041734 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-05-19 21:58:22.041744 | orchestrator | Monday 19 May 2025 21:56:03 +0000 (0:00:00.158) 0:02:25.649 ************ 2025-05-19 21:58:22.041753 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 21:58:22.041763 | orchestrator | 2025-05-19 21:58:22.041773 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-05-19 21:58:22.041782 | orchestrator | Monday 19 May 2025 21:56:03 +0000 (0:00:00.210) 0:02:25.860 ************ 2025-05-19 21:58:22.041792 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:22.041802 | orchestrator | 2025-05-19 21:58:22.041811 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-05-19 21:58:22.041821 | orchestrator | Monday 19 May 2025 21:56:04 +0000 (0:00:01.356) 0:02:27.216 ************ 2025-05-19 21:58:22.041830 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:22.041840 | orchestrator | 2025-05-19 21:58:22.041850 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-05-19 21:58:22.041860 | orchestrator | Monday 19 May 2025 21:56:06 +0000 (0:00:01.570) 0:02:28.787 ************ 2025-05-19 21:58:22.041869 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:22.041879 | orchestrator | 2025-05-19 21:58:22.041888 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-05-19 21:58:22.041898 | orchestrator | Monday 19 May 2025 21:56:07 +0000 (0:00:00.814) 0:02:29.601 ************ 2025-05-19 21:58:22.041907 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:22.041917 | orchestrator | 2025-05-19 21:58:22.041927 | 
orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-05-19 21:58:22.041936 | orchestrator | Monday 19 May 2025 21:56:07 +0000 (0:00:00.469) 0:02:30.071 ************ 2025-05-19 21:58:22.041946 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:22.041956 | orchestrator | 2025-05-19 21:58:22.041965 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-05-19 21:58:22.041975 | orchestrator | Monday 19 May 2025 21:56:14 +0000 (0:00:06.783) 0:02:36.854 ************ 2025-05-19 21:58:22.041984 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:22.041994 | orchestrator | 2025-05-19 21:58:22.042010 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-05-19 21:58:22.042043 | orchestrator | Monday 19 May 2025 21:56:26 +0000 (0:00:11.649) 0:02:48.504 ************ 2025-05-19 21:58:22.042054 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:22.042064 | orchestrator | 2025-05-19 21:58:22.042074 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-05-19 21:58:22.042083 | orchestrator | 2025-05-19 21:58:22.042093 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-05-19 21:58:22.042103 | orchestrator | Monday 19 May 2025 21:56:26 +0000 (0:00:00.477) 0:02:48.981 ************ 2025-05-19 21:58:22.042127 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:22.042137 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:22.042147 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:58:22.042156 | orchestrator | 2025-05-19 21:58:22.042166 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-05-19 21:58:22.042176 | orchestrator | Monday 19 May 2025 21:56:27 +0000 (0:00:00.554) 0:02:49.536 ************ 2025-05-19 21:58:22.042186 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 21:58:22.042195 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:58:22.042205 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:58:22.042215 | orchestrator | 2025-05-19 21:58:22.042224 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-05-19 21:58:22.042234 | orchestrator | Monday 19 May 2025 21:56:27 +0000 (0:00:00.308) 0:02:49.844 ************ 2025-05-19 21:58:22.042244 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:58:22.042261 | orchestrator | 2025-05-19 21:58:22.042272 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-05-19 21:58:22.042288 | orchestrator | Monday 19 May 2025 21:56:27 +0000 (0:00:00.494) 0:02:50.339 ************ 2025-05-19 21:58:22.042298 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-19 21:58:22.042308 | orchestrator | 2025-05-19 21:58:22.042318 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-05-19 21:58:22.042327 | orchestrator | Monday 19 May 2025 21:56:29 +0000 (0:00:01.039) 0:02:51.378 ************ 2025-05-19 21:58:22.042337 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 21:58:22.042347 | orchestrator | 2025-05-19 21:58:22.042356 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-05-19 21:58:22.042366 | orchestrator | Monday 19 May 2025 21:56:30 +0000 (0:00:01.119) 0:02:52.497 ************ 2025-05-19 21:58:22.042376 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.042386 | orchestrator | 2025-05-19 21:58:22.042396 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-05-19 21:58:22.042405 | orchestrator | Monday 19 May 2025 21:56:30 +0000 (0:00:00.727) 0:02:53.224 ************ 2025-05-19 
21:58:22.042415 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 21:58:22.042425 | orchestrator | 2025-05-19 21:58:22.042435 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-05-19 21:58:22.042444 | orchestrator | Monday 19 May 2025 21:56:32 +0000 (0:00:01.195) 0:02:54.420 ************ 2025-05-19 21:58:22.042454 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.042464 | orchestrator | 2025-05-19 21:58:22.042474 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-05-19 21:58:22.042488 | orchestrator | Monday 19 May 2025 21:56:32 +0000 (0:00:00.236) 0:02:54.656 ************ 2025-05-19 21:58:22.042498 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.042508 | orchestrator | 2025-05-19 21:58:22.042518 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-05-19 21:58:22.042528 | orchestrator | Monday 19 May 2025 21:56:32 +0000 (0:00:00.252) 0:02:54.908 ************ 2025-05-19 21:58:22.042537 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.042547 | orchestrator | 2025-05-19 21:58:22.042557 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-05-19 21:58:22.042573 | orchestrator | Monday 19 May 2025 21:56:32 +0000 (0:00:00.245) 0:02:55.153 ************ 2025-05-19 21:58:22.042583 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.042593 | orchestrator | 2025-05-19 21:58:22.042603 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-05-19 21:58:22.042613 | orchestrator | Monday 19 May 2025 21:56:33 +0000 (0:00:00.248) 0:02:55.402 ************ 2025-05-19 21:58:22.042622 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-19 21:58:22.042632 | orchestrator | 2025-05-19 21:58:22.042641 | orchestrator | TASK [k3s_server_post : Wait for Cilium 
resources] ***************************** 2025-05-19 21:58:22.042651 | orchestrator | Monday 19 May 2025 21:56:38 +0000 (0:00:05.326) 0:03:00.729 ************ 2025-05-19 21:58:22.042661 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-05-19 21:58:22.042670 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-05-19 21:58:22.042680 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-05-19 21:58:22.042690 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-05-19 21:58:22.042700 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-05-19 21:58:22.042709 | orchestrator | 2025-05-19 21:58:22.042719 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-05-19 21:58:22.042729 | orchestrator | Monday 19 May 2025 21:57:47 +0000 (0:01:08.803) 0:04:09.532 ************ 2025-05-19 21:58:22.042738 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 21:58:22.042748 | orchestrator | 2025-05-19 21:58:22.042757 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-05-19 21:58:22.042767 | orchestrator | Monday 19 May 2025 21:57:48 +0000 (0:00:01.356) 0:04:10.888 ************ 2025-05-19 21:58:22.042777 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-19 21:58:22.042786 | orchestrator | 2025-05-19 21:58:22.042796 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-05-19 21:58:22.042806 | orchestrator | Monday 19 May 2025 21:57:50 +0000 (0:00:01.924) 0:04:12.812 ************ 2025-05-19 21:58:22.042816 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-19 21:58:22.042825 | orchestrator | 2025-05-19 21:58:22.042835 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests 
application fails] *** 2025-05-19 21:58:22.042845 | orchestrator | Monday 19 May 2025 21:57:51 +0000 (0:00:01.254) 0:04:14.067 ************ 2025-05-19 21:58:22.042855 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.042865 | orchestrator | 2025-05-19 21:58:22.042874 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-05-19 21:58:22.042884 | orchestrator | Monday 19 May 2025 21:57:51 +0000 (0:00:00.225) 0:04:14.293 ************ 2025-05-19 21:58:22.042894 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-05-19 21:58:22.042904 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-05-19 21:58:22.042914 | orchestrator | 2025-05-19 21:58:22.042923 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-05-19 21:58:22.042934 | orchestrator | Monday 19 May 2025 21:57:54 +0000 (0:00:02.822) 0:04:17.116 ************ 2025-05-19 21:58:22.042943 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.042953 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:58:22.042963 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:58:22.042972 | orchestrator | 2025-05-19 21:58:22.042982 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-05-19 21:58:22.042992 | orchestrator | Monday 19 May 2025 21:57:55 +0000 (0:00:00.380) 0:04:17.496 ************ 2025-05-19 21:58:22.043001 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:22.043011 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:22.043021 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:58:22.043030 | orchestrator | 2025-05-19 21:58:22.043045 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-05-19 21:58:22.043060 | orchestrator | 2025-05-19 21:58:22.043070 | orchestrator 
| TASK [k9s : Gather variables for each operating system] ************************ 2025-05-19 21:58:22.043080 | orchestrator | Monday 19 May 2025 21:57:56 +0000 (0:00:00.953) 0:04:18.450 ************ 2025-05-19 21:58:22.043090 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:22.043099 | orchestrator | 2025-05-19 21:58:22.043122 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-05-19 21:58:22.043133 | orchestrator | Monday 19 May 2025 21:57:56 +0000 (0:00:00.162) 0:04:18.612 ************ 2025-05-19 21:58:22.043142 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-05-19 21:58:22.043152 | orchestrator | 2025-05-19 21:58:22.043161 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-05-19 21:58:22.043171 | orchestrator | Monday 19 May 2025 21:57:56 +0000 (0:00:00.446) 0:04:19.059 ************ 2025-05-19 21:58:22.043181 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:22.043190 | orchestrator | 2025-05-19 21:58:22.043200 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-05-19 21:58:22.043210 | orchestrator | 2025-05-19 21:58:22.043219 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-05-19 21:58:22.043229 | orchestrator | Monday 19 May 2025 21:58:03 +0000 (0:00:06.444) 0:04:25.504 ************ 2025-05-19 21:58:22.043239 | orchestrator | ok: [testbed-node-3] 2025-05-19 21:58:22.043249 | orchestrator | ok: [testbed-node-4] 2025-05-19 21:58:22.043258 | orchestrator | ok: [testbed-node-5] 2025-05-19 21:58:22.043268 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:22.043282 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:22.043292 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:58:22.043301 | orchestrator | 2025-05-19 21:58:22.043311 | orchestrator | TASK [Manage labels] 
*********************************************************** 2025-05-19 21:58:22.043321 | orchestrator | Monday 19 May 2025 21:58:03 +0000 (0:00:00.653) 0:04:26.157 ************ 2025-05-19 21:58:22.043331 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-19 21:58:22.043340 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-19 21:58:22.043350 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-05-19 21:58:22.043360 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-19 21:58:22.043369 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-19 21:58:22.043379 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-19 21:58:22.043389 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-05-19 21:58:22.043399 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-19 21:58:22.043408 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-19 21:58:22.043418 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-05-19 21:58:22.043428 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-19 21:58:22.043437 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-19 21:58:22.043447 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-05-19 21:58:22.043456 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-19 21:58:22.043466 | orchestrator | ok: [testbed-node-2 -> 
localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-19 21:58:22.043475 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-05-19 21:58:22.043485 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-19 21:58:22.043499 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-05-19 21:58:22.043509 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-19 21:58:22.043519 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-19 21:58:22.043528 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-05-19 21:58:22.043538 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-19 21:58:22.043547 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-19 21:58:22.043557 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-05-19 21:58:22.043567 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-19 21:58:22.043576 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-19 21:58:22.043586 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-05-19 21:58:22.043595 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-19 21:58:22.043605 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-19 21:58:22.043615 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-05-19 21:58:22.043624 | orchestrator | 2025-05-19 21:58:22.043639 | 
orchestrator | TASK [Manage annotations] ****************************************************** 2025-05-19 21:58:22.043649 | orchestrator | Monday 19 May 2025 21:58:18 +0000 (0:00:14.705) 0:04:40.863 ************ 2025-05-19 21:58:22.043659 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:58:22.043669 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:58:22.043678 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:58:22.043688 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.043697 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:58:22.043707 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:58:22.043716 | orchestrator | 2025-05-19 21:58:22.043726 | orchestrator | TASK [Manage taints] *********************************************************** 2025-05-19 21:58:22.043736 | orchestrator | Monday 19 May 2025 21:58:18 +0000 (0:00:00.490) 0:04:41.353 ************ 2025-05-19 21:58:22.043745 | orchestrator | skipping: [testbed-node-3] 2025-05-19 21:58:22.043755 | orchestrator | skipping: [testbed-node-4] 2025-05-19 21:58:22.043764 | orchestrator | skipping: [testbed-node-5] 2025-05-19 21:58:22.043774 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:22.043783 | orchestrator | skipping: [testbed-node-1] 2025-05-19 21:58:22.043793 | orchestrator | skipping: [testbed-node-2] 2025-05-19 21:58:22.043802 | orchestrator | 2025-05-19 21:58:22.043812 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:58:22.043822 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:58:22.043833 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-05-19 21:58:22.043843 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-05-19 21:58:22.043853 | orchestrator | testbed-node-2 : ok=34  
changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-05-19 21:58:22.043862 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-19 21:58:22.043872 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-19 21:58:22.043888 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-05-19 21:58:22.043898 | orchestrator | 2025-05-19 21:58:22.043907 | orchestrator | 2025-05-19 21:58:22.043917 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:58:22.043927 | orchestrator | Monday 19 May 2025 21:58:19 +0000 (0:00:00.601) 0:04:41.955 ************ 2025-05-19 21:58:22.043937 | orchestrator | =============================================================================== 2025-05-19 21:58:22.043946 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 68.80s 2025-05-19 21:58:22.044546 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.18s 2025-05-19 21:58:22.044565 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 15.09s 2025-05-19 21:58:22.044573 | orchestrator | Manage labels ---------------------------------------------------------- 14.71s 2025-05-19 21:58:22.044581 | orchestrator | kubectl : Install required packages ------------------------------------ 11.65s 2025-05-19 21:58:22.044589 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.50s 2025-05-19 21:58:22.044596 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.78s 2025-05-19 21:58:22.044604 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.44s 2025-05-19 21:58:22.044612 | orchestrator | k3s_download : 
Download k3s binary x64 ---------------------------------- 5.88s 2025-05-19 21:58:22.044620 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.33s 2025-05-19 21:58:22.044628 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.93s 2025-05-19 21:58:22.044636 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.82s 2025-05-19 21:58:22.044644 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.64s 2025-05-19 21:58:22.044652 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.19s 2025-05-19 21:58:22.044660 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.13s 2025-05-19 21:58:22.044668 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.93s 2025-05-19 21:58:22.044676 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.93s 2025-05-19 21:58:22.044683 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.92s 2025-05-19 21:58:22.044691 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.68s 2025-05-19 21:58:22.044699 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.66s 2025-05-19 21:58:22.044707 | orchestrator | 2025-05-19 21:58:22 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:22.044722 | orchestrator | 2025-05-19 21:58:22 | INFO  | Task 412d5fa1-dae5-4b6f-9eea-6269ab43775f is in state STARTED 2025-05-19 21:58:22.044730 | orchestrator | 2025-05-19 21:58:22 | INFO  | Task 3b095f86-b4f7-41dc-ac5c-63fde1cd88f4 is in state STARTED 2025-05-19 21:58:22.044738 | orchestrator | 2025-05-19 21:58:22 | INFO  | Wait 1 second(s) 
until the next check 2025-05-19 21:58:25.089346 | orchestrator | 2025-05-19 21:58:25 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:25.089443 | orchestrator | 2025-05-19 21:58:25 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:25.089457 | orchestrator | 2025-05-19 21:58:25 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:25.090389 | orchestrator | 2025-05-19 21:58:25 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:25.090737 | orchestrator | 2025-05-19 21:58:25 | INFO  | Task 412d5fa1-dae5-4b6f-9eea-6269ab43775f is in state STARTED 2025-05-19 21:58:25.092837 | orchestrator | 2025-05-19 21:58:25 | INFO  | Task 3b095f86-b4f7-41dc-ac5c-63fde1cd88f4 is in state STARTED 2025-05-19 21:58:25.092877 | orchestrator | 2025-05-19 21:58:25 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:58:28.138328 | orchestrator | 2025-05-19 21:58:28 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:28.138425 | orchestrator | 2025-05-19 21:58:28 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:28.138441 | orchestrator | 2025-05-19 21:58:28 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:28.138453 | orchestrator | 2025-05-19 21:58:28 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:28.138788 | orchestrator | 2025-05-19 21:58:28 | INFO  | Task 412d5fa1-dae5-4b6f-9eea-6269ab43775f is in state STARTED 2025-05-19 21:58:28.138932 | orchestrator | 2025-05-19 21:58:28 | INFO  | Task 3b095f86-b4f7-41dc-ac5c-63fde1cd88f4 is in state SUCCESS 2025-05-19 21:58:28.139027 | orchestrator | 2025-05-19 21:58:28 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:58:31.172864 | orchestrator | 2025-05-19 21:58:31 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in 
state STARTED 2025-05-19 21:58:31.172975 | orchestrator | 2025-05-19 21:58:31 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:31.173298 | orchestrator | 2025-05-19 21:58:31 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:31.176530 | orchestrator | 2025-05-19 21:58:31 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:31.176625 | orchestrator | 2025-05-19 21:58:31 | INFO  | Task 412d5fa1-dae5-4b6f-9eea-6269ab43775f is in state SUCCESS 2025-05-19 21:58:31.176641 | orchestrator | 2025-05-19 21:58:31 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:58:34.240675 | orchestrator | 2025-05-19 21:58:34 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:34.242560 | orchestrator | 2025-05-19 21:58:34 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:34.244460 | orchestrator | 2025-05-19 21:58:34 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:34.246173 | orchestrator | 2025-05-19 21:58:34 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:34.246418 | orchestrator | 2025-05-19 21:58:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:58:37.300621 | orchestrator | 2025-05-19 21:58:37 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:37.300721 | orchestrator | 2025-05-19 21:58:37 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:37.300736 | orchestrator | 2025-05-19 21:58:37 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:37.302387 | orchestrator | 2025-05-19 21:58:37 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:37.302633 | orchestrator | 2025-05-19 21:58:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 
21:58:40.352609 | orchestrator | 2025-05-19 21:58:40 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:40.352724 | orchestrator | 2025-05-19 21:58:40 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:40.352740 | orchestrator | 2025-05-19 21:58:40 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:40.353061 | orchestrator | 2025-05-19 21:58:40 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:40.353218 | orchestrator | 2025-05-19 21:58:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:58:43.415308 | orchestrator | 2025-05-19 21:58:43 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:43.415482 | orchestrator | 2025-05-19 21:58:43 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:43.416892 | orchestrator | 2025-05-19 21:58:43 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:43.417867 | orchestrator | 2025-05-19 21:58:43 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:43.417889 | orchestrator | 2025-05-19 21:58:43 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:58:46.463603 | orchestrator | 2025-05-19 21:58:46 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:46.464243 | orchestrator | 2025-05-19 21:58:46 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:46.465425 | orchestrator | 2025-05-19 21:58:46 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:46.466337 | orchestrator | 2025-05-19 21:58:46 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:46.467135 | orchestrator | 2025-05-19 21:58:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:58:49.521680 | orchestrator 
| 2025-05-19 21:58:49 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:49.521775 | orchestrator | 2025-05-19 21:58:49 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:49.521789 | orchestrator | 2025-05-19 21:58:49 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:49.524937 | orchestrator | 2025-05-19 21:58:49 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:49.524976 | orchestrator | 2025-05-19 21:58:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:58:52.571614 | orchestrator | 2025-05-19 21:58:52 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:52.573115 | orchestrator | 2025-05-19 21:58:52 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:52.575069 | orchestrator | 2025-05-19 21:58:52 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:52.576431 | orchestrator | 2025-05-19 21:58:52 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:52.576463 | orchestrator | 2025-05-19 21:58:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:58:55.628793 | orchestrator | 2025-05-19 21:58:55 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:55.631174 | orchestrator | 2025-05-19 21:58:55 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:55.633604 | orchestrator | 2025-05-19 21:58:55 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state STARTED 2025-05-19 21:58:55.634520 | orchestrator | 2025-05-19 21:58:55 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:58:55.634553 | orchestrator | 2025-05-19 21:58:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:58:58.682675 | orchestrator | 2025-05-19 21:58:58 | INFO  | 
Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:58:58.683885 | orchestrator | 2025-05-19 21:58:58 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:58:58.685351 | orchestrator | 2025-05-19 21:58:58.685380 | orchestrator | 2025-05-19 21:58:58.685392 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-05-19 21:58:58.685404 | orchestrator | 2025-05-19 21:58:58.685415 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-19 21:58:58.685427 | orchestrator | Monday 19 May 2025 21:58:23 +0000 (0:00:00.164) 0:00:00.164 ************ 2025-05-19 21:58:58.685439 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-19 21:58:58.685450 | orchestrator | 2025-05-19 21:58:58.685460 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-19 21:58:58.685471 | orchestrator | Monday 19 May 2025 21:58:24 +0000 (0:00:00.753) 0:00:00.918 ************ 2025-05-19 21:58:58.685482 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:58.685493 | orchestrator | 2025-05-19 21:58:58.685504 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-05-19 21:58:58.685516 | orchestrator | Monday 19 May 2025 21:58:25 +0000 (0:00:01.036) 0:00:01.954 ************ 2025-05-19 21:58:58.685526 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:58.685537 | orchestrator | 2025-05-19 21:58:58.685548 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:58:58.685559 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:58:58.685572 | orchestrator | 2025-05-19 21:58:58.685583 | orchestrator | 2025-05-19 21:58:58.685594 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-19 21:58:58.685622 | orchestrator | Monday 19 May 2025 21:58:25 +0000 (0:00:00.386) 0:00:02.341 ************ 2025-05-19 21:58:58.685633 | orchestrator | =============================================================================== 2025-05-19 21:58:58.685644 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.04s 2025-05-19 21:58:58.685655 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.75s 2025-05-19 21:58:58.685666 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.39s 2025-05-19 21:58:58.685681 | orchestrator | 2025-05-19 21:58:58.685702 | orchestrator | 2025-05-19 21:58:58.685732 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-05-19 21:58:58.685748 | orchestrator | 2025-05-19 21:58:58.685765 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-05-19 21:58:58.685782 | orchestrator | Monday 19 May 2025 21:58:23 +0000 (0:00:00.184) 0:00:00.184 ************ 2025-05-19 21:58:58.685800 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:58.685819 | orchestrator | 2025-05-19 21:58:58.685831 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-05-19 21:58:58.685842 | orchestrator | Monday 19 May 2025 21:58:24 +0000 (0:00:00.517) 0:00:00.702 ************ 2025-05-19 21:58:58.685853 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:58.685863 | orchestrator | 2025-05-19 21:58:58.685874 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-19 21:58:58.685885 | orchestrator | Monday 19 May 2025 21:58:24 +0000 (0:00:00.442) 0:00:01.145 ************ 2025-05-19 21:58:58.685896 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-19 
21:58:58.685907 | orchestrator | 2025-05-19 21:58:58.685918 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-19 21:58:58.685929 | orchestrator | Monday 19 May 2025 21:58:25 +0000 (0:00:00.685) 0:00:01.830 ************ 2025-05-19 21:58:58.685940 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:58.685951 | orchestrator | 2025-05-19 21:58:58.685962 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-05-19 21:58:58.685973 | orchestrator | Monday 19 May 2025 21:58:26 +0000 (0:00:00.983) 0:00:02.814 ************ 2025-05-19 21:58:58.685998 | orchestrator | changed: [testbed-manager] 2025-05-19 21:58:58.686009 | orchestrator | 2025-05-19 21:58:58.686084 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-05-19 21:58:58.686147 | orchestrator | Monday 19 May 2025 21:58:27 +0000 (0:00:00.543) 0:00:03.357 ************ 2025-05-19 21:58:58.686158 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-19 21:58:58.686169 | orchestrator | 2025-05-19 21:58:58.686180 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-05-19 21:58:58.686191 | orchestrator | Monday 19 May 2025 21:58:28 +0000 (0:00:01.312) 0:00:04.670 ************ 2025-05-19 21:58:58.686202 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-19 21:58:58.686213 | orchestrator | 2025-05-19 21:58:58.686224 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-05-19 21:58:58.686235 | orchestrator | Monday 19 May 2025 21:58:29 +0000 (0:00:00.715) 0:00:05.385 ************ 2025-05-19 21:58:58.686245 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:58.686256 | orchestrator | 2025-05-19 21:58:58.686267 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-05-19 21:58:58.686278 | 
orchestrator | Monday 19 May 2025 21:58:29 +0000 (0:00:00.339) 0:00:05.725 ************ 2025-05-19 21:58:58.686289 | orchestrator | ok: [testbed-manager] 2025-05-19 21:58:58.686300 | orchestrator | 2025-05-19 21:58:58.686311 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 21:58:58.686322 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 21:58:58.686333 | orchestrator | 2025-05-19 21:58:58.686344 | orchestrator | 2025-05-19 21:58:58.686355 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 21:58:58.686366 | orchestrator | Monday 19 May 2025 21:58:29 +0000 (0:00:00.257) 0:00:05.983 ************ 2025-05-19 21:58:58.686376 | orchestrator | =============================================================================== 2025-05-19 21:58:58.686387 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.31s 2025-05-19 21:58:58.686398 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.98s 2025-05-19 21:58:58.686409 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.72s 2025-05-19 21:58:58.686435 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s 2025-05-19 21:58:58.686446 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.54s 2025-05-19 21:58:58.686457 | orchestrator | Get home directory of operator user ------------------------------------- 0.52s 2025-05-19 21:58:58.686468 | orchestrator | Create .kube directory -------------------------------------------------- 0.44s 2025-05-19 21:58:58.686479 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.34s 2025-05-19 21:58:58.686490 | orchestrator | Enable kubectl command line completion 
---------------------------------- 0.26s 2025-05-19 21:58:58.686500 | orchestrator | 2025-05-19 21:58:58.686512 | orchestrator | 2025-05-19 21:58:58 | INFO  | Task ca288efc-b945-44d8-a5da-a2872ea1afd5 is in state SUCCESS 2025-05-19 21:58:58.688441 | orchestrator | 2025-05-19 21:58:58.688470 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-05-19 21:58:58.688482 | orchestrator | 2025-05-19 21:58:58.688493 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-19 21:58:58.688504 | orchestrator | Monday 19 May 2025 21:56:38 +0000 (0:00:00.108) 0:00:00.108 ************ 2025-05-19 21:58:58.688515 | orchestrator | ok: [localhost] => { 2025-05-19 21:58:58.688527 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-05-19 21:58:58.688538 | orchestrator | } 2025-05-19 21:58:58.688550 | orchestrator | 2025-05-19 21:58:58.688571 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-05-19 21:58:58.688582 | orchestrator | Monday 19 May 2025 21:56:38 +0000 (0:00:00.059) 0:00:00.168 ************ 2025-05-19 21:58:58.688607 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-05-19 21:58:58.688620 | orchestrator | ...ignoring 2025-05-19 21:58:58.688631 | orchestrator | 2025-05-19 21:58:58.688641 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-05-19 21:58:58.688652 | orchestrator | Monday 19 May 2025 21:56:42 +0000 (0:00:03.210) 0:00:03.378 ************ 2025-05-19 21:58:58.688663 | orchestrator | skipping: [localhost] 2025-05-19 21:58:58.688674 | orchestrator | 2025-05-19 21:58:58.688685 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-05-19 21:58:58.688695 | orchestrator | Monday 19 May 2025 21:56:42 +0000 (0:00:00.109) 0:00:03.487 ************ 2025-05-19 21:58:58.688706 | orchestrator | ok: [localhost] 2025-05-19 21:58:58.688717 | orchestrator | 2025-05-19 21:58:58.688728 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 21:58:58.688738 | orchestrator | 2025-05-19 21:58:58.688749 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 21:58:58.688760 | orchestrator | Monday 19 May 2025 21:56:42 +0000 (0:00:00.270) 0:00:03.758 ************ 2025-05-19 21:58:58.688770 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:58.688781 | orchestrator | ok: [testbed-node-1] 2025-05-19 21:58:58.688792 | orchestrator | ok: [testbed-node-2] 2025-05-19 21:58:58.688803 | orchestrator | 2025-05-19 21:58:58.688814 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 21:58:58.688825 | orchestrator | Monday 19 May 2025 21:56:43 +0000 (0:00:00.776) 0:00:04.534 ************ 2025-05-19 21:58:58.688835 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-05-19 21:58:58.688847 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2025-05-19 21:58:58.688857 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-05-19 21:58:58.688868 | orchestrator | 2025-05-19 21:58:58.688879 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-05-19 21:58:58.688890 | orchestrator | 2025-05-19 21:58:58.688900 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-19 21:58:58.688911 | orchestrator | Monday 19 May 2025 21:56:45 +0000 (0:00:01.906) 0:00:06.440 ************ 2025-05-19 21:58:58.688922 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:58:58.688932 | orchestrator | 2025-05-19 21:58:58.688943 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-19 21:58:58.688954 | orchestrator | Monday 19 May 2025 21:56:45 +0000 (0:00:00.679) 0:00:07.119 ************ 2025-05-19 21:58:58.688964 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:58.688975 | orchestrator | 2025-05-19 21:58:58.688986 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-05-19 21:58:58.688997 | orchestrator | Monday 19 May 2025 21:56:46 +0000 (0:00:01.101) 0:00:08.221 ************ 2025-05-19 21:58:58.689007 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:58.689018 | orchestrator | 2025-05-19 21:58:58.689029 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-05-19 21:58:58.689039 | orchestrator | Monday 19 May 2025 21:56:47 +0000 (0:00:00.451) 0:00:08.673 ************ 2025-05-19 21:58:58.689050 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:58.689061 | orchestrator | 2025-05-19 21:58:58.689072 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-05-19 21:58:58.689083 | 
orchestrator | Monday 19 May 2025 21:56:47 +0000 (0:00:00.338) 0:00:09.011 ************ 2025-05-19 21:58:58.689174 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:58.689191 | orchestrator | 2025-05-19 21:58:58.689203 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-05-19 21:58:58.689214 | orchestrator | Monday 19 May 2025 21:56:48 +0000 (0:00:00.339) 0:00:09.351 ************ 2025-05-19 21:58:58.689224 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:58.689244 | orchestrator | 2025-05-19 21:58:58.689255 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-19 21:58:58.689266 | orchestrator | Monday 19 May 2025 21:56:48 +0000 (0:00:00.524) 0:00:09.876 ************ 2025-05-19 21:58:58.689277 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 21:58:58.689288 | orchestrator | 2025-05-19 21:58:58.689299 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-05-19 21:58:58.689310 | orchestrator | Monday 19 May 2025 21:56:49 +0000 (0:00:00.576) 0:00:10.452 ************ 2025-05-19 21:58:58.689321 | orchestrator | ok: [testbed-node-0] 2025-05-19 21:58:58.689332 | orchestrator | 2025-05-19 21:58:58.689343 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-05-19 21:58:58.689354 | orchestrator | Monday 19 May 2025 21:56:49 +0000 (0:00:00.800) 0:00:11.252 ************ 2025-05-19 21:58:58.689364 | orchestrator | skipping: [testbed-node-0] 2025-05-19 21:58:58.689375 | orchestrator | 2025-05-19 21:58:58.689386 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-05-19 21:58:58.689397 | orchestrator | Monday 19 May 2025 21:56:50 +0000 (0:00:00.309) 0:00:11.561 ************ 2025-05-19 21:58:58.689408 | orchestrator | 
skipping: [testbed-node-0] 2025-05-19 21:58:58.689419 | orchestrator | 2025-05-19 21:58:58.689448 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-05-19 21:58:58.689460 | orchestrator | Monday 19 May 2025 21:56:50 +0000 (0:00:00.331) 0:00:11.893 ************ 2025-05-19 21:58:58.689483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 21:58:58.689502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 21:58:58.689516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 21:58:58.689535 | orchestrator | 2025-05-19 21:58:58.689546 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-19 21:58:58.689557 | orchestrator | Monday 19 May 2025 21:56:51 +0000 (0:00:00.846) 0:00:12.740 ************ 2025-05-19 21:58:58.689578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 21:58:58.689596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 21:58:58.689608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 21:58:58.689627 | orchestrator | 2025-05-19 21:58:58.689638 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-19 21:58:58.689649 | orchestrator | Monday 19 May 2025 21:56:53 +0000 (0:00:02.298) 0:00:15.038 ************ 2025-05-19 21:58:58.689660 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-19 21:58:58.689671 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-19 21:58:58.689682 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-05-19 21:58:58.689693 | orchestrator | 2025-05-19 21:58:58.689704 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-05-19 21:58:58.689714 | orchestrator | Monday 19 May 2025 21:56:55 +0000 (0:00:01.960) 0:00:16.999 ************
2025-05-19 21:58:58.689725 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-19 21:58:58.689736 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-19 21:58:58.689746 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-19 21:58:58.689757 | orchestrator |
2025-05-19 21:58:58.689768 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-05-19 21:58:58.689778 | orchestrator | Monday 19 May 2025 21:57:00 +0000 (0:00:05.128) 0:00:22.128 ************
2025-05-19 21:58:58.689789 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-19 21:58:58.689800 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-19 21:58:58.689811 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-19 21:58:58.689821 | orchestrator |
2025-05-19 21:58:58.689832 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-05-19 21:58:58.689843 | orchestrator | Monday 19 May 2025 21:57:02 +0000 (0:00:01.306) 0:00:23.434 ************
2025-05-19 21:58:58.689859 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-19 21:58:58.689871 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-19 21:58:58.689882 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-19 21:58:58.689893 | orchestrator |
2025-05-19 21:58:58.689903 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-05-19 21:58:58.689914 | orchestrator | Monday 19 May 2025 21:57:03 +0000 (0:00:01.827) 0:00:25.261 ************
2025-05-19 21:58:58.689930 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-19 21:58:58.689941 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-19 21:58:58.689952 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-19 21:58:58.689962 | orchestrator |
2025-05-19 21:58:58.689973 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-05-19 21:58:58.689984 | orchestrator | Monday 19 May 2025 21:57:05 +0000 (0:00:01.749) 0:00:27.011 ************
2025-05-19 21:58:58.689995 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-19 21:58:58.690006 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-19 21:58:58.690057 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-19 21:58:58.690072 | orchestrator |
2025-05-19 21:58:58.690083 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-19 21:58:58.690130 | orchestrator | Monday 19 May 2025 21:57:08 +0000 (0:00:02.436) 0:00:29.448 ************
2025-05-19 21:58:58.690142 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:58.690153 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:58.690164 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:58.690174 | orchestrator |
2025-05-19 21:58:58.690185 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-05-19 21:58:58.690196 | orchestrator | Monday 19 May 2025 21:58:58
+0000 (0:00:00.424) 0:00:29.872 ************ 2025-05-19 21:58:58.690208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 21:58:58.690221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 21:58:58.690264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 21:58:58.690277 | orchestrator | 2025-05-19 21:58:58.690288 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-05-19 21:58:58.690299 | orchestrator | Monday 19 May 2025 21:57:10 +0000 (0:00:01.858) 0:00:31.731 ************ 2025-05-19 21:58:58.690310 | orchestrator | changed: [testbed-node-0] 2025-05-19 21:58:58.690327 | orchestrator | changed: [testbed-node-1] 2025-05-19 21:58:58.690338 | orchestrator | changed: [testbed-node-2] 2025-05-19 21:58:58.690349 | orchestrator | 2025-05-19 21:58:58.690360 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-05-19 21:58:58.690371 | 
orchestrator | Monday 19 May 2025 21:57:11 +0000 (0:00:01.294) 0:00:33.026 ************
2025-05-19 21:58:58.690382 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:58.690393 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:58:58.690404 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:58:58.690414 | orchestrator |
2025-05-19 21:58:58.690425 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-05-19 21:58:58.690436 | orchestrator | Monday 19 May 2025 21:57:19 +0000 (0:00:07.388) 0:00:40.414 ************
2025-05-19 21:58:58.690447 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:58.690458 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:58:58.690468 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:58:58.690479 | orchestrator |
2025-05-19 21:58:58.690490 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-19 21:58:58.690501 | orchestrator |
2025-05-19 21:58:58.690512 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-19 21:58:58.690523 | orchestrator | Monday 19 May 2025 21:57:19 +0000 (0:00:00.373) 0:00:40.788 ************
2025-05-19 21:58:58.690534 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:58:58.690545 | orchestrator |
2025-05-19 21:58:58.690556 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-19 21:58:58.690567 | orchestrator | Monday 19 May 2025 21:57:20 +0000 (0:00:00.585) 0:00:41.374 ************
2025-05-19 21:58:58.690578 | orchestrator | skipping: [testbed-node-0]
2025-05-19 21:58:58.690589 | orchestrator |
2025-05-19 21:58:58.690600 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-19 21:58:58.690611 | orchestrator | Monday 19 May 2025 21:57:20 +0000 (0:00:00.218) 0:00:41.592 ************
2025-05-19 21:58:58.690622 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:58.690633 | orchestrator |
2025-05-19 21:58:58.690644 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-19 21:58:58.690655 | orchestrator | Monday 19 May 2025 21:57:22 +0000 (0:00:02.312) 0:00:43.905 ************
2025-05-19 21:58:58.690666 | orchestrator | changed: [testbed-node-0]
2025-05-19 21:58:58.690676 | orchestrator |
2025-05-19 21:58:58.690687 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-19 21:58:58.690698 | orchestrator |
2025-05-19 21:58:58.690709 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-19 21:58:58.690720 | orchestrator | Monday 19 May 2025 21:58:16 +0000 (0:00:54.091) 0:01:37.997 ************
2025-05-19 21:58:58.690731 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:58:58.690742 | orchestrator |
2025-05-19 21:58:58.690753 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-19 21:58:58.690763 | orchestrator | Monday 19 May 2025 21:58:17 +0000 (0:00:00.666) 0:01:38.664 ************
2025-05-19 21:58:58.690774 | orchestrator | skipping: [testbed-node-1]
2025-05-19 21:58:58.690785 | orchestrator |
2025-05-19 21:58:58.690796 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-19 21:58:58.690807 | orchestrator | Monday 19 May 2025 21:58:18 +0000 (0:00:00.856) 0:01:39.521 ************
2025-05-19 21:58:58.690818 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:58:58.690829 | orchestrator |
2025-05-19 21:58:58.690840 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-19 21:58:58.690851 | orchestrator | Monday 19 May 2025 21:58:24 +0000 (0:00:06.732) 0:01:46.254 ************
2025-05-19 21:58:58.690862 | orchestrator | changed: [testbed-node-1]
2025-05-19 21:58:58.690872 | orchestrator |
2025-05-19 21:58:58.690883 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-19 21:58:58.690894 | orchestrator |
2025-05-19 21:58:58.690905 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-19 21:58:58.690930 | orchestrator | Monday 19 May 2025 21:58:35 +0000 (0:00:10.245) 0:01:56.499 ************
2025-05-19 21:58:58.690941 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:58:58.690952 | orchestrator |
2025-05-19 21:58:58.690963 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-19 21:58:58.690974 | orchestrator | Monday 19 May 2025 21:58:35 +0000 (0:00:00.621) 0:01:57.121 ************
2025-05-19 21:58:58.690986 | orchestrator | skipping: [testbed-node-2]
2025-05-19 21:58:58.690997 | orchestrator |
2025-05-19 21:58:58.691007 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-19 21:58:58.691018 | orchestrator | Monday 19 May 2025 21:58:36 +0000 (0:00:00.229) 0:01:57.350 ************
2025-05-19 21:58:58.691029 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:58:58.691040 | orchestrator |
2025-05-19 21:58:58.691051 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-19 21:58:58.691068 | orchestrator | Monday 19 May 2025 21:58:43 +0000 (0:00:07.006) 0:02:04.357 ************
2025-05-19 21:58:58.691079 | orchestrator | changed: [testbed-node-2]
2025-05-19 21:58:58.691124 | orchestrator |
2025-05-19 21:58:58.691137 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-05-19 21:58:58.691148 | orchestrator |
2025-05-19 21:58:58.691158 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-19 21:58:58.691169 | orchestrator | Monday 19 May 2025 21:58:54 +0000 (0:00:11.090) 0:02:15.447 ************
2025-05-19 21:58:58.691179 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 21:58:58.691190 | orchestrator |
2025-05-19 21:58:58.691206 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-19 21:58:58.691218 | orchestrator | Monday 19 May 2025 21:58:54 +0000 (0:00:00.681) 0:02:16.129 ************
2025-05-19 21:58:58.691229 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-19 21:58:58.691239 | orchestrator | enable_outward_rabbitmq_True
2025-05-19 21:58:58.691250 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-19 21:58:58.691261 | orchestrator | outward_rabbitmq_restart
2025-05-19 21:58:58.691272 | orchestrator | ok: [testbed-node-2]
2025-05-19 21:58:58.691282 | orchestrator | ok: [testbed-node-0]
2025-05-19 21:58:58.691293 | orchestrator | ok: [testbed-node-1]
2025-05-19 21:58:58.691304 | orchestrator |
2025-05-19 21:58:58.691315 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-19 21:58:58.691326 | orchestrator | skipping: no hosts matched
2025-05-19 21:58:58.691336 | orchestrator |
2025-05-19 21:58:58.691347 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-19 21:58:58.691358 | orchestrator | skipping: no hosts matched
2025-05-19 21:58:58.691369 | orchestrator |
2025-05-19 21:58:58.691379 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-05-19 21:58:58.691390 | orchestrator | skipping: no hosts matched
2025-05-19 21:58:58.691401 | orchestrator |
2025-05-19 21:58:58.691411 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 21:58:58.691422 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-05-19 21:58:58.691434 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-05-19 21:58:58.691445 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:58:58.691456 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 21:58:58.691466 | orchestrator |
2025-05-19 21:58:58.691477 | orchestrator |
2025-05-19 21:58:58.691488 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 21:58:58.691507 | orchestrator | Monday 19 May 2025 21:58:57 +0000 (0:00:02.964) 0:02:19.093 ************
2025-05-19 21:58:58.691518 | orchestrator | ===============================================================================
2025-05-19 21:58:58.691528 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 75.43s
2025-05-19 21:58:58.691539 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 16.05s
2025-05-19 21:58:58.691550 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.39s
2025-05-19 21:58:58.691560 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 5.13s
2025-05-19 21:58:58.691571 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.21s
2025-05-19 21:58:58.691582 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.96s
2025-05-19 21:58:58.691593 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.44s
2025-05-19 21:58:58.691603 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.30s
2025-05-19 21:58:58.691614 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.96s
2025-05-19 21:58:58.691625 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.91s
2025-05-19 21:58:58.691635 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.87s
2025-05-19 21:58:58.691646 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.86s
2025-05-19 21:58:58.691657 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.83s
2025-05-19 21:58:58.691667 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.75s
2025-05-19 21:58:58.691678 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.31s
2025-05-19 21:58:58.691689 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.30s
2025-05-19 21:58:58.691699 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.29s
2025-05-19 21:58:58.691710 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.10s
2025-05-19 21:58:58.691721 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.85s
2025-05-19 21:58:58.691731 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.80s
2025-05-19 21:58:58.691742 | orchestrator | 2025-05-19 21:58:58 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:58:58.691753 | orchestrator | 2025-05-19 21:58:58 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:01.732193 | orchestrator | 2025-05-19 21:59:01 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:01.732300 | orchestrator | 2025-05-19 21:59:01 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:01.732709 | orchestrator | 2025-05-19 21:59:01 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:01.732740 | orchestrator | 2025-05-19 21:59:01 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:04.770959 | orchestrator | 2025-05-19 21:59:04 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:04.771303 | orchestrator | 2025-05-19 21:59:04 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:04.771870 | orchestrator | 2025-05-19 21:59:04 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:04.771901 | orchestrator | 2025-05-19 21:59:04 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:07.823232 | orchestrator | 2025-05-19 21:59:07 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:07.824722 | orchestrator | 2025-05-19 21:59:07 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:07.828886 | orchestrator | 2025-05-19 21:59:07 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:07.828926 | orchestrator | 2025-05-19 21:59:07 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:10.864489 | orchestrator | 2025-05-19 21:59:10 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:10.864593 | orchestrator | 2025-05-19 21:59:10 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:10.865364 | orchestrator | 2025-05-19 21:59:10 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:10.865390 | orchestrator | 2025-05-19 21:59:10 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:13.900623 | orchestrator | 2025-05-19 21:59:13 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:13.904428 | orchestrator | 2025-05-19 21:59:13 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:13.904869 | orchestrator | 2025-05-19 21:59:13 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:13.904898 | orchestrator | 2025-05-19 21:59:13 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:16.947554 | orchestrator | 2025-05-19 21:59:16 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:16.949162 | orchestrator | 2025-05-19 21:59:16 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:16.953530 | orchestrator | 2025-05-19 21:59:16 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:16.954261 | orchestrator | 2025-05-19 21:59:16 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:20.007064 | orchestrator | 2025-05-19 21:59:20 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:20.009034 | orchestrator | 2025-05-19 21:59:20 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:20.011456 | orchestrator | 2025-05-19 21:59:20 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:20.011773 | orchestrator | 2025-05-19 21:59:20 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:23.073711 | orchestrator | 2025-05-19 21:59:23 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:23.074639 | orchestrator | 2025-05-19 21:59:23 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:23.076065 | orchestrator | 2025-05-19 21:59:23 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:23.076431 | orchestrator | 2025-05-19 21:59:23 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:26.128570 | orchestrator | 2025-05-19 21:59:26 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:26.129753 | orchestrator | 2025-05-19 21:59:26 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:26.131321 | orchestrator | 2025-05-19 21:59:26 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:26.131350 | orchestrator | 2025-05-19 21:59:26 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:29.164633 | orchestrator | 2025-05-19 21:59:29 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:29.165200 | orchestrator | 2025-05-19 21:59:29 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:29.170298 | orchestrator | 2025-05-19 21:59:29 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:29.170380 | orchestrator | 2025-05-19 21:59:29 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:32.229901 | orchestrator | 2025-05-19 21:59:32 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:32.231536 | orchestrator | 2025-05-19 21:59:32 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:32.233138 | orchestrator | 2025-05-19 21:59:32 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:32.233177 | orchestrator | 2025-05-19 21:59:32 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:35.276962 | orchestrator | 2025-05-19 21:59:35 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 21:59:35.277246 | orchestrator | 2025-05-19 21:59:35 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 21:59:35.278222 | orchestrator | 2025-05-19 21:59:35 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED
2025-05-19 21:59:35.278254 | orchestrator | 2025-05-19 21:59:35 | INFO  | Wait 1 second(s) until the next check
2025-05-19 21:59:38.320034 | orchestrator | 2025-05-19 21:59:38 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state
STARTED 2025-05-19 21:59:38.322828 | orchestrator | 2025-05-19 21:59:38 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:59:38.324262 | orchestrator | 2025-05-19 21:59:38 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:59:38.324604 | orchestrator | 2025-05-19 21:59:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:59:41.367811 | orchestrator | 2025-05-19 21:59:41 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:59:41.371028 | orchestrator | 2025-05-19 21:59:41 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:59:41.371115 | orchestrator | 2025-05-19 21:59:41 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:59:41.371130 | orchestrator | 2025-05-19 21:59:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:59:44.424701 | orchestrator | 2025-05-19 21:59:44 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:59:44.428272 | orchestrator | 2025-05-19 21:59:44 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:59:44.429636 | orchestrator | 2025-05-19 21:59:44 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:59:44.429741 | orchestrator | 2025-05-19 21:59:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:59:47.475965 | orchestrator | 2025-05-19 21:59:47 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:59:47.477050 | orchestrator | 2025-05-19 21:59:47 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:59:47.478154 | orchestrator | 2025-05-19 21:59:47 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:59:47.478192 | orchestrator | 2025-05-19 21:59:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:59:50.520896 | orchestrator | 
2025-05-19 21:59:50 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:59:50.521548 | orchestrator | 2025-05-19 21:59:50 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:59:50.523571 | orchestrator | 2025-05-19 21:59:50 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:59:50.523640 | orchestrator | 2025-05-19 21:59:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:59:53.580417 | orchestrator | 2025-05-19 21:59:53 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:59:53.584682 | orchestrator | 2025-05-19 21:59:53 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:59:53.586432 | orchestrator | 2025-05-19 21:59:53 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:59:53.586682 | orchestrator | 2025-05-19 21:59:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:59:56.641017 | orchestrator | 2025-05-19 21:59:56 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:59:56.642953 | orchestrator | 2025-05-19 21:59:56 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:59:56.644681 | orchestrator | 2025-05-19 21:59:56 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:59:56.644722 | orchestrator | 2025-05-19 21:59:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 21:59:59.716385 | orchestrator | 2025-05-19 21:59:59 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 21:59:59.717018 | orchestrator | 2025-05-19 21:59:59 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 21:59:59.718199 | orchestrator | 2025-05-19 21:59:59 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 21:59:59.718254 | orchestrator | 2025-05-19 21:59:59 | INFO  | 
Wait 1 second(s) until the next check 2025-05-19 22:00:02.774783 | orchestrator | 2025-05-19 22:00:02 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:00:02.776287 | orchestrator | 2025-05-19 22:00:02 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:00:02.778276 | orchestrator | 2025-05-19 22:00:02 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 22:00:02.778380 | orchestrator | 2025-05-19 22:00:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:00:05.822519 | orchestrator | 2025-05-19 22:00:05 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:00:05.822629 | orchestrator | 2025-05-19 22:00:05 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:00:05.823375 | orchestrator | 2025-05-19 22:00:05 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state STARTED 2025-05-19 22:00:05.824212 | orchestrator | 2025-05-19 22:00:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:00:08.867890 | orchestrator | 2025-05-19 22:00:08 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:00:08.869178 | orchestrator | 2025-05-19 22:00:08 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:00:08.871768 | orchestrator | 2025-05-19 22:00:08 | INFO  | Task 59a240c4-2926-4643-8b94-a8b15e500db8 is in state SUCCESS 2025-05-19 22:00:08.875512 | orchestrator | 2025-05-19 22:00:08.875603 | orchestrator | 2025-05-19 22:00:08.875612 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:00:08.875620 | orchestrator | 2025-05-19 22:00:08.875628 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:00:08.875636 | orchestrator | Monday 19 May 2025 21:57:34 +0000 (0:00:00.306) 0:00:00.306 ************ 2025-05-19 
22:00:08.875644 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:00:08.875652 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:00:08.875658 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:00:08.875665 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:00:08.875689 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:00:08.875696 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:00:08.875703 | orchestrator | 2025-05-19 22:00:08.875710 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:00:08.875718 | orchestrator | Monday 19 May 2025 21:57:34 +0000 (0:00:00.767) 0:00:01.074 ************ 2025-05-19 22:00:08.875725 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-19 22:00:08.875733 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-19 22:00:08.875740 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-19 22:00:08.875747 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-19 22:00:08.875755 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-19 22:00:08.875762 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-19 22:00:08.875769 | orchestrator | 2025-05-19 22:00:08.875776 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-19 22:00:08.875783 | orchestrator | 2025-05-19 22:00:08.875790 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-19 22:00:08.875797 | orchestrator | Monday 19 May 2025 21:57:36 +0000 (0:00:01.143) 0:00:02.217 ************ 2025-05-19 22:00:08.875806 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:00:08.875815 | orchestrator | 2025-05-19 22:00:08.875822 | orchestrator | TASK [ovn-controller : Ensuring config 
directories exist] ********************** 2025-05-19 22:00:08.875829 | orchestrator | Monday 19 May 2025 21:57:37 +0000 (0:00:01.334) 0:00:03.552 ************ 2025-05-19 22:00:08.875839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-19 22:00:08.875878 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875885 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875898 | orchestrator | 2025-05-19 22:00:08.875915 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-19 22:00:08.875922 | orchestrator | Monday 19 May 2025 21:57:38 +0000 (0:00:01.213) 0:00:04.765 ************ 2025-05-19 22:00:08.875929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875967 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.875974 | 
orchestrator | 2025-05-19 22:00:08.876018 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-19 22:00:08.876028 | orchestrator | Monday 19 May 2025 21:57:40 +0000 (0:00:01.805) 0:00:06.571 ************ 2025-05-19 22:00:08.876036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876101 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876118 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876126 | orchestrator | 2025-05-19 22:00:08.876134 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-19 22:00:08.876142 | orchestrator | Monday 19 May 2025 21:57:41 +0000 (0:00:01.129) 0:00:07.701 ************ 2025-05-19 22:00:08.876150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876203 | orchestrator | 2025-05-19 22:00:08.876213 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-05-19 22:00:08.876221 | orchestrator | Monday 19 May 2025 21:57:43 +0000 (0:00:01.864) 0:00:09.565 ************ 2025-05-19 22:00:08.876228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876256 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.876282 | orchestrator | 2025-05-19 22:00:08.876290 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-05-19 22:00:08.876298 | orchestrator | Monday 19 May 2025 21:57:45 +0000 (0:00:01.770) 0:00:11.336 ************ 2025-05-19 22:00:08.876306 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:00:08.876315 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:00:08.876323 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:00:08.876331 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:00:08.876339 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:00:08.876347 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:00:08.876354 | orchestrator | 2025-05-19 22:00:08.876362 | orchestrator | TASK [ovn-controller 
: Configure OVN in OVSDB] ********************************* 2025-05-19 22:00:08.876370 | orchestrator | Monday 19 May 2025 21:57:47 +0000 (0:00:02.410) 0:00:13.747 ************ 2025-05-19 22:00:08.876378 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-05-19 22:00:08.876387 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-05-19 22:00:08.876395 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-05-19 22:00:08.876403 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-05-19 22:00:08.876411 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-05-19 22:00:08.876418 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-05-19 22:00:08.876425 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 22:00:08.876433 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 22:00:08.876444 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 22:00:08.876451 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 22:00:08.876458 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 22:00:08.876465 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-05-19 22:00:08.876472 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 22:00:08.876482 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 22:00:08.876489 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 22:00:08.876496 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 22:00:08.876503 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 22:00:08.876510 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-05-19 22:00:08.876517 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 22:00:08.876525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 22:00:08.876532 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 22:00:08.876539 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 22:00:08.876546 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 22:00:08.876559 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-05-19 22:00:08.876566 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 22:00:08.876573 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 22:00:08.876580 | orchestrator | changed: [testbed-node-3] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 22:00:08.876587 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 22:00:08.876594 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 22:00:08.876604 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-05-19 22:00:08.876611 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 22:00:08.876618 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 22:00:08.876625 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 22:00:08.876632 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 22:00:08.876639 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 22:00:08.876646 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-05-19 22:00:08.876653 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-19 22:00:08.876659 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-19 22:00:08.876665 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-19 22:00:08.876672 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-05-19 22:00:08.876679 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 
2025-05-19 22:00:08.876686 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-05-19 22:00:08.876693 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-05-19 22:00:08.876701 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-05-19 22:00:08.876712 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-05-19 22:00:08.876719 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-05-19 22:00:08.876726 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-05-19 22:00:08.876733 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-05-19 22:00:08.876740 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-19 22:00:08.876747 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-19 22:00:08.876754 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-05-19 22:00:08.876767 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-19 22:00:08.876774 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 
'state': 'present'}) 2025-05-19 22:00:08.876781 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-05-19 22:00:08.876788 | orchestrator | 2025-05-19 22:00:08.876795 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 22:00:08.876802 | orchestrator | Monday 19 May 2025 21:58:06 +0000 (0:00:18.732) 0:00:32.479 ************ 2025-05-19 22:00:08.876809 | orchestrator | 2025-05-19 22:00:08.876817 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 22:00:08.876824 | orchestrator | Monday 19 May 2025 21:58:06 +0000 (0:00:00.186) 0:00:32.666 ************ 2025-05-19 22:00:08.876831 | orchestrator | 2025-05-19 22:00:08.876838 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 22:00:08.876845 | orchestrator | Monday 19 May 2025 21:58:06 +0000 (0:00:00.173) 0:00:32.840 ************ 2025-05-19 22:00:08.876852 | orchestrator | 2025-05-19 22:00:08.876859 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 22:00:08.876866 | orchestrator | Monday 19 May 2025 21:58:06 +0000 (0:00:00.142) 0:00:32.982 ************ 2025-05-19 22:00:08.876873 | orchestrator | 2025-05-19 22:00:08.876880 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 22:00:08.876888 | orchestrator | Monday 19 May 2025 21:58:07 +0000 (0:00:00.182) 0:00:33.164 ************ 2025-05-19 22:00:08.876895 | orchestrator | 2025-05-19 22:00:08.876902 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-05-19 22:00:08.876909 | orchestrator | Monday 19 May 2025 21:58:07 +0000 (0:00:00.209) 0:00:33.374 ************ 2025-05-19 22:00:08.876915 | orchestrator | 2025-05-19 22:00:08.876922 | orchestrator | RUNNING HANDLER [ovn-controller : 
Reload systemd config] *********************** 2025-05-19 22:00:08.876929 | orchestrator | Monday 19 May 2025 21:58:07 +0000 (0:00:00.196) 0:00:33.570 ************ 2025-05-19 22:00:08.876937 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:00:08.876947 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:00:08.876954 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:00:08.876961 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:00:08.876968 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:00:08.876975 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:00:08.876982 | orchestrator | 2025-05-19 22:00:08.876989 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-05-19 22:00:08.876996 | orchestrator | Monday 19 May 2025 21:58:10 +0000 (0:00:03.414) 0:00:36.987 ************ 2025-05-19 22:00:08.877003 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:00:08.877010 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:00:08.877017 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:00:08.877024 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:00:08.877032 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:00:08.877038 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:00:08.877058 | orchestrator | 2025-05-19 22:00:08.877065 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-05-19 22:00:08.877072 | orchestrator | 2025-05-19 22:00:08.877078 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-19 22:00:08.877084 | orchestrator | Monday 19 May 2025 21:58:49 +0000 (0:00:38.539) 0:01:15.526 ************ 2025-05-19 22:00:08.877091 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:00:08.877099 | orchestrator | 2025-05-19 22:00:08.877105 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2025-05-19 22:00:08.877112 | orchestrator | Monday 19 May 2025 21:58:49 +0000 (0:00:00.559) 0:01:16.085 ************ 2025-05-19 22:00:08.877120 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:00:08.877132 | orchestrator | 2025-05-19 22:00:08.877139 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-05-19 22:00:08.877146 | orchestrator | Monday 19 May 2025 21:58:50 +0000 (0:00:00.770) 0:01:16.856 ************ 2025-05-19 22:00:08.877153 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:00:08.877160 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:00:08.877168 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:00:08.877175 | orchestrator | 2025-05-19 22:00:08.877182 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-05-19 22:00:08.877189 | orchestrator | Monday 19 May 2025 21:58:51 +0000 (0:00:00.790) 0:01:17.646 ************ 2025-05-19 22:00:08.877196 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:00:08.877203 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:00:08.877210 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:00:08.877220 | orchestrator | 2025-05-19 22:00:08.877228 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-05-19 22:00:08.877235 | orchestrator | Monday 19 May 2025 21:58:51 +0000 (0:00:00.303) 0:01:17.950 ************ 2025-05-19 22:00:08.877242 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:00:08.877249 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:00:08.877256 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:00:08.877263 | orchestrator | 2025-05-19 22:00:08.877269 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-05-19 22:00:08.877277 | orchestrator | Monday 19 May 2025 
21:58:52 +0000 (0:00:00.292) 0:01:18.243 ************ 2025-05-19 22:00:08.877284 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:00:08.877290 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:00:08.877297 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:00:08.877304 | orchestrator | 2025-05-19 22:00:08.877311 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-05-19 22:00:08.877318 | orchestrator | Monday 19 May 2025 21:58:52 +0000 (0:00:00.532) 0:01:18.775 ************ 2025-05-19 22:00:08.877325 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:00:08.877332 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:00:08.877339 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:00:08.877346 | orchestrator | 2025-05-19 22:00:08.877353 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-05-19 22:00:08.877360 | orchestrator | Monday 19 May 2025 21:58:52 +0000 (0:00:00.327) 0:01:19.102 ************ 2025-05-19 22:00:08.877367 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877374 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877381 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877388 | orchestrator | 2025-05-19 22:00:08.877395 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-05-19 22:00:08.877402 | orchestrator | Monday 19 May 2025 21:58:53 +0000 (0:00:00.297) 0:01:19.400 ************ 2025-05-19 22:00:08.877409 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877416 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877423 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877430 | orchestrator | 2025-05-19 22:00:08.877437 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-05-19 22:00:08.877444 | orchestrator | Monday 19 May 2025 21:58:53 +0000 (0:00:00.324) 
0:01:19.724 ************ 2025-05-19 22:00:08.877452 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877459 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877466 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877473 | orchestrator | 2025-05-19 22:00:08.877480 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-05-19 22:00:08.877487 | orchestrator | Monday 19 May 2025 21:58:54 +0000 (0:00:00.693) 0:01:20.418 ************ 2025-05-19 22:00:08.877494 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877501 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877508 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877515 | orchestrator | 2025-05-19 22:00:08.877522 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-05-19 22:00:08.877534 | orchestrator | Monday 19 May 2025 21:58:54 +0000 (0:00:00.327) 0:01:20.745 ************ 2025-05-19 22:00:08.877541 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877548 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877555 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877563 | orchestrator | 2025-05-19 22:00:08.877570 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-05-19 22:00:08.877577 | orchestrator | Monday 19 May 2025 21:58:54 +0000 (0:00:00.319) 0:01:21.064 ************ 2025-05-19 22:00:08.877584 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877591 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877601 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877608 | orchestrator | 2025-05-19 22:00:08.877615 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-05-19 22:00:08.877621 | orchestrator | Monday 19 May 2025 21:58:55 +0000 (0:00:00.302) 
0:01:21.367 ************ 2025-05-19 22:00:08.877629 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877636 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877643 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877649 | orchestrator | 2025-05-19 22:00:08.877656 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-05-19 22:00:08.877662 | orchestrator | Monday 19 May 2025 21:58:55 +0000 (0:00:00.476) 0:01:21.844 ************ 2025-05-19 22:00:08.877669 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877676 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877683 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877690 | orchestrator | 2025-05-19 22:00:08.877697 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-05-19 22:00:08.877704 | orchestrator | Monday 19 May 2025 21:58:56 +0000 (0:00:00.332) 0:01:22.176 ************ 2025-05-19 22:00:08.877711 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877719 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877725 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877732 | orchestrator | 2025-05-19 22:00:08.877740 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-05-19 22:00:08.877747 | orchestrator | Monday 19 May 2025 21:58:56 +0000 (0:00:00.300) 0:01:22.476 ************ 2025-05-19 22:00:08.877754 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877761 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877768 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877775 | orchestrator | 2025-05-19 22:00:08.877782 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-05-19 22:00:08.877789 | orchestrator | Monday 19 May 2025 21:58:56 +0000 (0:00:00.332) 
0:01:22.809 ************ 2025-05-19 22:00:08.877796 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877803 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877810 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877817 | orchestrator | 2025-05-19 22:00:08.877824 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-05-19 22:00:08.877831 | orchestrator | Monday 19 May 2025 21:58:57 +0000 (0:00:00.489) 0:01:23.299 ************ 2025-05-19 22:00:08.877839 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.877846 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.877856 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.877864 | orchestrator | 2025-05-19 22:00:08.877871 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-05-19 22:00:08.877878 | orchestrator | Monday 19 May 2025 21:58:57 +0000 (0:00:00.363) 0:01:23.663 ************ 2025-05-19 22:00:08.877885 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:00:08.877892 | orchestrator | 2025-05-19 22:00:08.877899 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-05-19 22:00:08.877910 | orchestrator | Monday 19 May 2025 21:58:58 +0000 (0:00:00.614) 0:01:24.278 ************ 2025-05-19 22:00:08.877917 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:00:08.877925 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:00:08.877931 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:00:08.877938 | orchestrator | 2025-05-19 22:00:08.877945 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-05-19 22:00:08.877953 | orchestrator | Monday 19 May 2025 21:58:59 +0000 (0:00:00.920) 0:01:25.198 ************ 2025-05-19 22:00:08.877960 | orchestrator | ok: 
[testbed-node-0] 2025-05-19 22:00:08.877967 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:00:08.877974 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:00:08.877981 | orchestrator | 2025-05-19 22:00:08.877988 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-05-19 22:00:08.877995 | orchestrator | Monday 19 May 2025 21:58:59 +0000 (0:00:00.823) 0:01:26.022 ************ 2025-05-19 22:00:08.878002 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.878009 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.878092 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.878102 | orchestrator | 2025-05-19 22:00:08.878109 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-05-19 22:00:08.878116 | orchestrator | Monday 19 May 2025 21:59:00 +0000 (0:00:00.487) 0:01:26.510 ************ 2025-05-19 22:00:08.878122 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.878129 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.878135 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.878142 | orchestrator | 2025-05-19 22:00:08.878149 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-05-19 22:00:08.878155 | orchestrator | Monday 19 May 2025 21:59:00 +0000 (0:00:00.368) 0:01:26.878 ************ 2025-05-19 22:00:08.878162 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.878168 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.878175 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.878181 | orchestrator | 2025-05-19 22:00:08.878188 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-05-19 22:00:08.878194 | orchestrator | Monday 19 May 2025 21:59:01 +0000 (0:00:00.634) 0:01:27.513 ************ 2025-05-19 22:00:08.878201 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 22:00:08.878207 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.878214 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.878221 | orchestrator | 2025-05-19 22:00:08.878227 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-05-19 22:00:08.878234 | orchestrator | Monday 19 May 2025 21:59:01 +0000 (0:00:00.360) 0:01:27.873 ************ 2025-05-19 22:00:08.878240 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.878247 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.878253 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.878260 | orchestrator | 2025-05-19 22:00:08.878267 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-05-19 22:00:08.878273 | orchestrator | Monday 19 May 2025 21:59:02 +0000 (0:00:00.490) 0:01:28.363 ************ 2025-05-19 22:00:08.878279 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:00:08.878292 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:00:08.878299 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:00:08.878305 | orchestrator | 2025-05-19 22:00:08.878312 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-19 22:00:08.878318 | orchestrator | Monday 19 May 2025 21:59:02 +0000 (0:00:00.324) 0:01:28.688 ************ 2025-05-19 22:00:08.878326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:00:08.878663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878728 | orchestrator | 2025-05-19 22:00:08.878740 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-19 22:00:08.878752 | orchestrator | Monday 19 May 2025 21:59:04 +0000 (0:00:01.925) 0:01:30.614 ************ 2025-05-19 22:00:08.878780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.878929 | orchestrator | 2025-05-19 22:00:08.878950 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-19 22:00:08.878963 | orchestrator | Monday 19 May 2025 21:59:08 +0000 (0:00:03.845) 0:01:34.460 ************ 2025-05-19 22:00:08.878975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.879000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.879012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.879023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.879035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:00:08.879080 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.879092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.879106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.879119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.879133 | orchestrator |
2025-05-19 22:00:08.879146 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-19 22:00:08.879159 | orchestrator | Monday 19 May 2025 21:59:10 +0000 (0:00:02.059) 0:01:36.519 ************
2025-05-19 22:00:08.879173 | orchestrator |
2025-05-19 22:00:08.879185 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-19 22:00:08.879199 | orchestrator | Monday 19 May 2025 21:59:10 +0000 (0:00:00.144) 0:01:36.664 ************
2025-05-19 22:00:08.879212 | orchestrator |
2025-05-19 22:00:08.879223 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-19 22:00:08.879234 | orchestrator | Monday 19 May 2025 21:59:10 +0000 (0:00:00.133) 0:01:36.798 ************
2025-05-19 22:00:08.879252 | orchestrator |
2025-05-19 22:00:08.879263 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-19 22:00:08.879275 | orchestrator | Monday 19 May 2025 21:59:10 +0000 (0:00:00.102) 0:01:36.901 ************
2025-05-19 22:00:08.879286 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:00:08.879297 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:00:08.879308 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:00:08.879319 | orchestrator |
2025-05-19 22:00:08.879330 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-19 22:00:08.879341 | orchestrator | Monday 19 May 2025 21:59:18 +0000 (0:00:07.825) 0:01:44.726 ************
2025-05-19 22:00:08.879353 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:00:08.879364 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:00:08.879376 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:00:08.879386 | orchestrator |
2025-05-19 22:00:08.879403 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-19 22:00:08.879414 | orchestrator | Monday 19 May 2025 21:59:21 +0000 (0:00:02.709) 0:01:47.436 ************
2025-05-19 22:00:08.879426 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:00:08.879437 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:00:08.879447 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:00:08.879458 | orchestrator |
2025-05-19 22:00:08.879469 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-19 22:00:08.879480 | orchestrator | Monday 19 May 2025 21:59:29 +0000 (0:00:07.684) 0:01:55.121 ************
2025-05-19 22:00:08.879491 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:00:08.879502 | orchestrator |
2025-05-19 22:00:08.879513 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-19 22:00:08.879524 | orchestrator | Monday 19 May 2025 21:59:29 +0000 (0:00:00.122) 0:01:55.243 ************
2025-05-19 22:00:08.879535 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:00:08.879546 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:00:08.879557 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:00:08.879568 | orchestrator |
2025-05-19 22:00:08.879579 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-05-19 22:00:08.879590 | orchestrator | Monday 19 May 2025 21:59:30 +0000 (0:00:00.966) 0:01:56.210 ************
2025-05-19 22:00:08.879601 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:00:08.879612 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:00:08.879623 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:00:08.879633 | orchestrator |
2025-05-19 22:00:08.879644 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-19 22:00:08.879656 | orchestrator | Monday 19 May 2025 21:59:31 +0000 (0:00:00.922) 0:01:57.132 ************
2025-05-19 22:00:08.879667 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:00:08.879678 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:00:08.879689 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:00:08.879700 | orchestrator |
2025-05-19 22:00:08.879710 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-19 22:00:08.879722 | orchestrator | Monday 19 May 2025 21:59:31 +0000 (0:00:00.883) 0:01:58.015 ************
2025-05-19 22:00:08.879733 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:00:08.879744 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:00:08.879755 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:00:08.879766 | orchestrator |
2025-05-19 22:00:08.879777 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-19 22:00:08.879789 | orchestrator | Monday 19 May 2025 21:59:32 +0000 (0:00:00.828) 0:01:58.844 ************
2025-05-19 22:00:08.879800 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:00:08.879811 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:00:08.879828 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:00:08.879840 | orchestrator |
2025-05-19 22:00:08.879851 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-19 22:00:08.879862 | orchestrator | Monday 19 May 2025 21:59:33 +0000 (0:00:00.980) 0:01:59.824 ************
2025-05-19 22:00:08.879881 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:00:08.879892 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:00:08.879903 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:00:08.879913 | orchestrator |
2025-05-19 22:00:08.879924 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-05-19 22:00:08.879935 | orchestrator | Monday 19 May 2025 21:59:35 +0000 (0:00:01.356) 0:02:01.180 ************
2025-05-19 22:00:08.879946 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:00:08.879957 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:00:08.879968 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:00:08.879979 | orchestrator |
2025-05-19 22:00:08.879990 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-05-19 22:00:08.880001 | orchestrator | Monday 19 May 2025 21:59:35 +0000 (0:00:00.419) 0:02:01.599 ************
2025-05-19 22:00:08.880012 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880024 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880035 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880076 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880094 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880107 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880129 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880161 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880173 | orchestrator |
2025-05-19 22:00:08.880184 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-05-19 22:00:08.880195 | orchestrator | Monday 19 May 2025 21:59:36 +0000 (0:00:01.451) 0:02:03.051 ************
2025-05-19 22:00:08.880207 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880219 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880230 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880242 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880281 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880323 | orchestrator |
2025-05-19 22:00:08.880335 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-05-19 22:00:08.880346 | orchestrator | Monday 19 May 2025 21:59:41 +0000 (0:00:04.413) 0:02:07.464 ************
2025-05-19 22:00:08.880364 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880376 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880411 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880452 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:00:08.880481 | orchestrator |
2025-05-19 22:00:08.880493 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-19 22:00:08.880504 | orchestrator | Monday 19 May 2025 21:59:44 +0000 (0:00:02.953) 0:02:10.418 ************
2025-05-19 22:00:08.880515 | orchestrator |
2025-05-19 22:00:08.880526 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-19 22:00:08.880537 | orchestrator | Monday 19 May 2025 21:59:44 +0000 (0:00:00.070) 0:02:10.488 ************
2025-05-19 22:00:08.880548 | orchestrator |
2025-05-19 22:00:08.880559 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-19 22:00:08.880570 | orchestrator | Monday 19 May 2025 21:59:44 +0000 (0:00:00.066) 0:02:10.554 ************
2025-05-19 22:00:08.880581 | orchestrator |
2025-05-19 22:00:08.880592 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-19 22:00:08.880603 | orchestrator | Monday 19 May 2025 21:59:44 +0000 (0:00:00.064) 0:02:10.619 ************
2025-05-19 22:00:08.880614 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:00:08.880625 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:00:08.880636 | orchestrator |
2025-05-19 22:00:08.880657 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-19 22:00:08.880678 | orchestrator | Monday 19 May 2025 21:59:50 +0000 (0:00:06.222) 0:02:16.842 ************
2025-05-19 22:00:08.880698 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:00:08.880716 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:00:08.880732 | orchestrator |
2025-05-19 22:00:08.880751 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-19 22:00:08.880769 | orchestrator | Monday 19 May 2025 21:59:56 +0000 (0:00:06.229) 0:02:23.072 ************
2025-05-19 22:00:08.880787 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:00:08.880805 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:00:08.880825 | orchestrator |
2025-05-19 22:00:08.880843 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-19 22:00:08.880863 | orchestrator | Monday 19 May 2025 22:00:03 +0000 (0:00:06.136) 0:02:29.208 ************
2025-05-19 22:00:08.880882 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:00:08.880901 | orchestrator |
2025-05-19 22:00:08.880922 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-19 22:00:08.880942 | orchestrator | Monday 19 May 2025 22:00:03 +0000 (0:00:00.145) 0:02:29.354 ************
2025-05-19 22:00:08.880962 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:00:08.880978 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:00:08.880990 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:00:08.881001 | orchestrator |
2025-05-19 22:00:08.881012 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-05-19 22:00:08.881023 | orchestrator | Monday 19 May 2025 22:00:04 +0000 (0:00:01.005) 0:02:30.359 ************
2025-05-19 22:00:08.881034 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:00:08.881087 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:00:08.881102 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:00:08.881113 | orchestrator |
2025-05-19 22:00:08.881124 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-19 22:00:08.881135 | orchestrator | Monday 19 May 2025 22:00:04 +0000 (0:00:00.625) 0:02:30.984 ************
2025-05-19 22:00:08.881146 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:00:08.881158 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:00:08.881168 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:00:08.881179 | orchestrator |
2025-05-19 22:00:08.881190 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-19 22:00:08.881214 | orchestrator | Monday 19 May 2025 22:00:05 +0000 (0:00:00.732) 0:02:31.717 ************
2025-05-19 22:00:08.881225 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:00:08.881236 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:00:08.881247 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:00:08.881257 | orchestrator |
2025-05-19 22:00:08.881268 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-19 22:00:08.881279 | orchestrator | Monday 19 May 2025 22:00:06 +0000 (0:00:00.641) 0:02:32.358 ************
2025-05-19 22:00:08.881290 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:00:08.881301 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:00:08.881312 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:00:08.881323 | orchestrator |
2025-05-19 22:00:08.881334 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-19 22:00:08.881346 | orchestrator | Monday 19 May 2025 22:00:07 +0000 (0:00:01.062) 0:02:33.420 ************
2025-05-19 22:00:08.881357 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:00:08.881368 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:00:08.881379 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:00:08.881390 | orchestrator |
2025-05-19 22:00:08.881402 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 22:00:08.881425 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-05-19 22:00:08.881438 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-05-19 22:00:08.881449 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-05-19 22:00:08.881460 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:00:08.881471 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:00:08.881482 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:00:08.881493 | orchestrator |
2025-05-19 22:00:08.881504 | orchestrator |
2025-05-19 22:00:08.881515 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:00:08.881526 | orchestrator | Monday 19 May 2025 22:00:08 +0000 (0:00:00.876) 0:02:34.297 ************
2025-05-19 22:00:08.881537 | orchestrator | ===============================================================================
2025-05-19 22:00:08.881548 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 38.54s
2025-05-19 22:00:08.881559 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.73s
2025-05-19 22:00:08.881571 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.05s
2025-05-19 22:00:08.881582 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.82s
2025-05-19 22:00:08.881593 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.94s
2025-05-19 22:00:08.881604 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.41s
2025-05-19 22:00:08.881615 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.85s
2025-05-19 22:00:08.881637 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 3.41s
2025-05-19 22:00:08.881649 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.95s
2025-05-19 22:00:08.881660 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.41s
2025-05-19 22:00:08.881671 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.06s
2025-05-19 22:00:08.881682 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.93s
2025-05-19 22:00:08.881701 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.86s
2025-05-19 22:00:08.881712 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.81s
2025-05-19 22:00:08.881723 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.77s
2025-05-19 22:00:08.881734 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s
2025-05-19 22:00:08.881745 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.36s
2025-05-19 22:00:08.881756 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.33s
2025-05-19 22:00:08.881768 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.21s
2025-05-19 22:00:08.881779 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.14s
2025-05-19 22:00:11.923165 | orchestrator | 2025-05-19 22:00:11 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 22:00:11.923276 | orchestrator | 2025-05-19 22:00:11 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 22:00:11.923292 | orchestrator | 2025-05-19 22:00:11 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:00:14.985528 | orchestrator | 2025-05-19 22:00:14 | INFO  | Task
ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
STARTED 2025-05-19 22:01:49.644938 | orchestrator | 2025-05-19 22:01:49 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:01:49.644952 | orchestrator | 2025-05-19 22:01:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:01:52.699281 | orchestrator | 2025-05-19 22:01:52 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:01:52.701118 | orchestrator | 2025-05-19 22:01:52 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:01:52.701197 | orchestrator | 2025-05-19 22:01:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:01:55.758765 | orchestrator | 2025-05-19 22:01:55 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:01:55.760510 | orchestrator | 2025-05-19 22:01:55 | INFO  | Task db967412-d9c8-4d9c-8b82-c601dc9d6544 is in state STARTED 2025-05-19 22:01:55.764359 | orchestrator | 2025-05-19 22:01:55 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:01:55.764436 | orchestrator | 2025-05-19 22:01:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:01:58.810111 | orchestrator | 2025-05-19 22:01:58 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:01:58.810209 | orchestrator | 2025-05-19 22:01:58 | INFO  | Task db967412-d9c8-4d9c-8b82-c601dc9d6544 is in state STARTED 2025-05-19 22:01:58.810618 | orchestrator | 2025-05-19 22:01:58 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:01:58.810641 | orchestrator | 2025-05-19 22:01:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:01.851398 | orchestrator | 2025-05-19 22:02:01 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:01.851609 | orchestrator | 2025-05-19 22:02:01 | INFO  | Task db967412-d9c8-4d9c-8b82-c601dc9d6544 is in state STARTED 2025-05-19 22:02:01.857827 | orchestrator | 
2025-05-19 22:02:01 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:01.857874 | orchestrator | 2025-05-19 22:02:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:04.922191 | orchestrator | 2025-05-19 22:02:04 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:04.924301 | orchestrator | 2025-05-19 22:02:04 | INFO  | Task db967412-d9c8-4d9c-8b82-c601dc9d6544 is in state STARTED 2025-05-19 22:02:04.924890 | orchestrator | 2025-05-19 22:02:04 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:04.924909 | orchestrator | 2025-05-19 22:02:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:07.964779 | orchestrator | 2025-05-19 22:02:07 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:07.964931 | orchestrator | 2025-05-19 22:02:07 | INFO  | Task db967412-d9c8-4d9c-8b82-c601dc9d6544 is in state STARTED 2025-05-19 22:02:07.965810 | orchestrator | 2025-05-19 22:02:07 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:07.965851 | orchestrator | 2025-05-19 22:02:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:11.010871 | orchestrator | 2025-05-19 22:02:11 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:11.012831 | orchestrator | 2025-05-19 22:02:11 | INFO  | Task db967412-d9c8-4d9c-8b82-c601dc9d6544 is in state STARTED 2025-05-19 22:02:11.014530 | orchestrator | 2025-05-19 22:02:11 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:11.014762 | orchestrator | 2025-05-19 22:02:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:14.071566 | orchestrator | 2025-05-19 22:02:14 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:14.073563 | orchestrator | 2025-05-19 22:02:14 | INFO  | Task 
db967412-d9c8-4d9c-8b82-c601dc9d6544 is in state SUCCESS 2025-05-19 22:02:14.079086 | orchestrator | 2025-05-19 22:02:14 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:14.079147 | orchestrator | 2025-05-19 22:02:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:17.129266 | orchestrator | 2025-05-19 22:02:17 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:17.129368 | orchestrator | 2025-05-19 22:02:17 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:17.129382 | orchestrator | 2025-05-19 22:02:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:20.177091 | orchestrator | 2025-05-19 22:02:20 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:20.177610 | orchestrator | 2025-05-19 22:02:20 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:20.177656 | orchestrator | 2025-05-19 22:02:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:23.219910 | orchestrator | 2025-05-19 22:02:23 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:23.221081 | orchestrator | 2025-05-19 22:02:23 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:23.221155 | orchestrator | 2025-05-19 22:02:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:26.265379 | orchestrator | 2025-05-19 22:02:26 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:26.266572 | orchestrator | 2025-05-19 22:02:26 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:26.266606 | orchestrator | 2025-05-19 22:02:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:29.314871 | orchestrator | 2025-05-19 22:02:29 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 
22:02:29.316486 | orchestrator | 2025-05-19 22:02:29 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:29.316521 | orchestrator | 2025-05-19 22:02:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:32.366348 | orchestrator | 2025-05-19 22:02:32 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:32.367220 | orchestrator | 2025-05-19 22:02:32 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:32.367282 | orchestrator | 2025-05-19 22:02:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:35.418741 | orchestrator | 2025-05-19 22:02:35 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:35.420910 | orchestrator | 2025-05-19 22:02:35 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:35.421733 | orchestrator | 2025-05-19 22:02:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:38.454130 | orchestrator | 2025-05-19 22:02:38 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:38.454324 | orchestrator | 2025-05-19 22:02:38 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:38.456144 | orchestrator | 2025-05-19 22:02:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:41.500023 | orchestrator | 2025-05-19 22:02:41 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:41.501396 | orchestrator | 2025-05-19 22:02:41 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED 2025-05-19 22:02:41.501443 | orchestrator | 2025-05-19 22:02:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:44.551178 | orchestrator | 2025-05-19 22:02:44 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:44.552872 | orchestrator | 2025-05-19 22:02:44 | INFO  | Task 
ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 22:02:44.553286 | orchestrator | 2025-05-19 22:02:44 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:02:47.591199 | orchestrator | 2025-05-19 22:02:47 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 22:02:47.591305 | orchestrator | 2025-05-19 22:02:47 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 22:02:47.591321 | orchestrator | 2025-05-19 22:02:47 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:02:50.641110 | orchestrator | 2025-05-19 22:02:50 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 22:02:50.642784 | orchestrator | 2025-05-19 22:02:50 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state STARTED
2025-05-19 22:02:50.642871 | orchestrator | 2025-05-19 22:02:50 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:02:53.690355 | orchestrator | 2025-05-19 22:02:53 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED
2025-05-19 22:02:53.690688 | orchestrator | 2025-05-19 22:02:53 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 22:02:53.698533 | orchestrator | 2025-05-19 22:02:53 | INFO  | Task ccbba3cc-905d-4f62-ad2e-8f8220a605e8 is in state SUCCESS
2025-05-19 22:02:53.699406 | orchestrator |
2025-05-19 22:02:53.699434 | orchestrator | None
2025-05-19 22:02:53.701389 | orchestrator |
2025-05-19 22:02:53.701425 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:02:53.701444 | orchestrator |
2025-05-19 22:02:53.701463 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:02:53.701481 | orchestrator | Monday 19 May 2025 21:56:16 +0000 (0:00:00.330) 0:00:00.330 ************
2025-05-19 22:02:53.701498 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:02:53.701516 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:02:53.701533 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:02:53.701549 | orchestrator |
2025-05-19 22:02:53.701568 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:02:53.701587 | orchestrator | Monday 19 May 2025 21:56:16 +0000 (0:00:00.310) 0:00:00.641 ************
2025-05-19 22:02:53.701607 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-05-19 22:02:53.701618 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-05-19 22:02:53.701628 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-05-19 22:02:53.701638 | orchestrator |
2025-05-19 22:02:53.701648 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-19 22:02:53.701657 | orchestrator |
2025-05-19 22:02:53.701667 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-19 22:02:53.701677 | orchestrator | Monday 19 May 2025 21:56:16 +0000 (0:00:00.421) 0:00:01.063 ************
2025-05-19 22:02:53.701687 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:02:53.701696 | orchestrator |
2025-05-19 22:02:53.701706 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-19 22:02:53.701716 | orchestrator | Monday 19 May 2025 21:56:17 +0000 (0:00:00.922) 0:00:01.986 ************
2025-05-19 22:02:53.701725 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:02:53.701735 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:02:53.701745 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:02:53.701754 | orchestrator |
2025-05-19 22:02:53.701764 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-19 22:02:53.701773 | orchestrator | Monday 19 May 2025 21:56:18 +0000 (0:00:00.762) 0:00:02.748 ************
2025-05-19 22:02:53.701783 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:02:53.701792 | orchestrator |
2025-05-19 22:02:53.701802 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-05-19 22:02:53.701811 | orchestrator | Monday 19 May 2025 21:56:19 +0000 (0:00:01.027) 0:00:03.776 ************
2025-05-19 22:02:53.701821 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:02:53.701830 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:02:53.701840 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:02:53.701849 | orchestrator |
2025-05-19 22:02:53.701859 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-05-19 22:02:53.701868 | orchestrator | Monday 19 May 2025 21:56:20 +0000 (0:00:00.832) 0:00:04.609 ************
2025-05-19 22:02:53.701878 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-19 22:02:53.701888 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-19 22:02:53.701923 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-19 22:02:53.701933 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-19 22:02:53.701942 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-19 22:02:53.701952 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-19 22:02:53.701980 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-19 22:02:53.701993 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-19 22:02:53.702283 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-19 22:02:53.702302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-19 22:02:53.702313 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-19 22:02:53.702324 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-19 22:02:53.702335 | orchestrator |
2025-05-19 22:02:53.702346 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-19 22:02:53.702356 | orchestrator | Monday 19 May 2025 21:56:23 +0000 (0:00:03.434) 0:00:08.043 ************
2025-05-19 22:02:53.702365 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-19 22:02:53.702375 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-19 22:02:53.702385 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-19 22:02:53.702394 | orchestrator |
2025-05-19 22:02:53.702404 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-19 22:02:53.702413 | orchestrator | Monday 19 May 2025 21:56:24 +0000 (0:00:01.069) 0:00:09.113 ************
2025-05-19 22:02:53.702423 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-19 22:02:53.702433 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-19 22:02:53.702443 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-19 22:02:53.702452 | orchestrator |
2025-05-19 22:02:53.702462 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-19 22:02:53.702471 | orchestrator | Monday 19 May 2025 21:56:26 +0000 (0:00:02.005) 0:00:11.119 ************
2025-05-19 22:02:53.702481 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-05-19 22:02:53.702491 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.702516 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-05-19 22:02:53.702526 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.702536 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-05-19 22:02:53.702546 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.702555 | orchestrator |
2025-05-19 22:02:53.702565 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-05-19 22:02:53.702574 | orchestrator | Monday 19 May 2025 21:56:28 +0000 (0:00:01.582) 0:00:12.702 ************
2025-05-19 22:02:53.702587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.702603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.702624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.702640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.702651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.702662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.702680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.702691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.702701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.702719 | orchestrator |
2025-05-19 22:02:53.702729 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-05-19 22:02:53.702739 | orchestrator | Monday 19 May 2025 21:56:31 +0000 (0:00:03.077) 0:00:15.780 ************
2025-05-19 22:02:53.702749 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.702759 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.702768 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.702778 | orchestrator |
2025-05-19 22:02:53.702788 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-05-19 22:02:53.702797 | orchestrator | Monday 19 May 2025 21:56:33 +0000 (0:00:02.144) 0:00:17.924 ************
2025-05-19 22:02:53.702836 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-05-19 22:02:53.702847 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-05-19 22:02:53.702856 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-05-19 22:02:53.702866 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-05-19 22:02:53.702876 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-05-19 22:02:53.702885 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-05-19 22:02:53.702932 | orchestrator |
2025-05-19 22:02:53.702943 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-05-19 22:02:53.702952 | orchestrator | Monday 19 May 2025 21:56:36 +0000 (0:00:02.635) 0:00:20.560 ************
2025-05-19 22:02:53.702962 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.702972 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.702981 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.702991 | orchestrator |
2025-05-19 22:02:53.703000 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-05-19 22:02:53.703010 | orchestrator | Monday 19 May 2025 21:56:39 +0000 (0:00:02.884) 0:00:23.445 ************
2025-05-19 22:02:53.703019 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:02:53.703035 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:02:53.703045 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:02:53.703055 | orchestrator |
2025-05-19 22:02:53.703064 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-05-19 22:02:53.703074 | orchestrator | Monday 19 May 2025 21:56:40 +0000 (0:00:01.593) 0:00:25.039 ************
2025-05-19 22:02:53.703084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.703103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.703114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.703133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-19 22:02:53.703144 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.703154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.703164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.703179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.703190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-19 22:02:53.703200 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.703217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.703234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.703245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.703255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-19 22:02:53.703265 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.703275 | orchestrator |
2025-05-19 22:02:53.703284 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-05-19 22:02:53.703294 | orchestrator | Monday 19 May 2025 21:56:41 +0000 (0:00:00.500) 0:00:25.539 ************
2025-05-19 22:02:53.703309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.703319 | orchestrator
| changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703569 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 22:02:53.703579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 22:02:53.703589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 22:02:53.703615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 22:02:53.703644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 22:02:53.703665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb', '__omit_place_holder__09e749a8125c8dec6234b351cd3f56f8a2b80eeb'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-19 22:02:53.703675 | orchestrator | 2025-05-19 22:02:53.703684 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-19 22:02:53.703694 | orchestrator | Monday 19 May 2025 21:56:45 +0000 (0:00:04.554) 0:00:30.093 ************ 2025-05-19 22:02:53.703704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.703785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 22:02:53.703795 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 22:02:53.703809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 22:02:53.703819 | orchestrator | 2025-05-19 22:02:53.703829 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-19 22:02:53.703838 | orchestrator | Monday 19 May 2025 21:56:49 +0000 (0:00:03.707) 0:00:33.801 ************ 2025-05-19 22:02:53.703854 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-19 22:02:53.703864 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-19 22:02:53.703874 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-19 22:02:53.703883 | orchestrator | 2025-05-19 22:02:53.703912 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-19 22:02:53.703922 | orchestrator | Monday 19 May 2025 21:56:51 +0000 
(0:00:01.649) 0:00:35.450 ************ 2025-05-19 22:02:53.703932 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-19 22:02:53.703942 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-19 22:02:53.703951 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-19 22:02:53.703961 | orchestrator | 2025-05-19 22:02:53.706263 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-19 22:02:53.706348 | orchestrator | Monday 19 May 2025 21:56:55 +0000 (0:00:04.484) 0:00:39.935 ************ 2025-05-19 22:02:53.706363 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.706375 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.706386 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.706398 | orchestrator | 2025-05-19 22:02:53.706409 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-19 22:02:53.706420 | orchestrator | Monday 19 May 2025 21:56:57 +0000 (0:00:02.147) 0:00:42.082 ************ 2025-05-19 22:02:53.706431 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-19 22:02:53.706445 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-19 22:02:53.706456 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-19 22:02:53.706467 | orchestrator | 2025-05-19 22:02:53.706478 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-19 22:02:53.706489 | orchestrator | Monday 19 May 2025 21:57:02 +0000 
(0:00:04.350) 0:00:46.433 ************ 2025-05-19 22:02:53.706500 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-19 22:02:53.706511 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-19 22:02:53.706522 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-19 22:02:53.706533 | orchestrator | 2025-05-19 22:02:53.706544 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-19 22:02:53.706555 | orchestrator | Monday 19 May 2025 21:57:04 +0000 (0:00:01.991) 0:00:48.424 ************ 2025-05-19 22:02:53.706566 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-19 22:02:53.706577 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-19 22:02:53.706588 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-19 22:02:53.706599 | orchestrator | 2025-05-19 22:02:53.706610 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-19 22:02:53.706621 | orchestrator | Monday 19 May 2025 21:57:06 +0000 (0:00:02.052) 0:00:50.476 ************ 2025-05-19 22:02:53.706632 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-19 22:02:53.706643 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-19 22:02:53.706654 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-19 22:02:53.706665 | orchestrator | 2025-05-19 22:02:53.706676 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-19 22:02:53.706718 | orchestrator | Monday 19 May 2025 21:57:08 +0000 (0:00:02.365) 0:00:52.842 ************ 2025-05-19 22:02:53.706731 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.706742 | orchestrator | 2025-05-19 22:02:53.706753 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-19 22:02:53.706763 | orchestrator | Monday 19 May 2025 21:57:10 +0000 (0:00:01.640) 0:00:54.483 ************ 2025-05-19 22:02:53.706792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.706807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.706835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.706848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.706860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.706871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.706931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 22:02:53.706951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 22:02:53.706963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 22:02:53.706974 | orchestrator | 2025-05-19 22:02:53.706986 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-19 22:02:53.706998 | orchestrator | Monday 19 May 2025 21:57:14 +0000 (0:00:03.721) 0:00:58.204 ************ 2025-05-19 22:02:53.707023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-19 22:02:53.707035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 22:02:53.707047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707066 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.707078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707118 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.707129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707172 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.707183 | orchestrator |
2025-05-19 22:02:53.707195 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-05-19 22:02:53.707213 | orchestrator | Monday 19 May 2025 21:57:14 +0000 (0:00:00.883) 0:00:59.088 ************
2025-05-19 22:02:53.707225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707265 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.707276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707318 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.707330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707370 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.707381 | orchestrator |
2025-05-19 22:02:53.707392 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-05-19 22:02:53.707407 | orchestrator | Monday 19 May 2025 21:57:16 +0000 (0:00:01.474) 0:01:00.563 ************
2025-05-19 22:02:53.707419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707462 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.707473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707513 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.707525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707566 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.707577 | orchestrator |
2025-05-19 22:02:53.707588 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-05-19 22:02:53.707599 | orchestrator | Monday 19 May 2025 21:57:17 +0000 (0:00:00.793) 0:01:01.356 ************
2025-05-19 22:02:53.707616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707733 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.707745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707784 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.707805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707848 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.707859 | orchestrator |
2025-05-19 22:02:53.707870 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-05-19 22:02:53.707882 | orchestrator | Monday 19 May 2025 21:57:17 +0000 (0:00:00.598) 0:01:01.954 ************
2025-05-19 22:02:53.707914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.707943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.707954 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.707972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.707991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.708003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.708014 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.708025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.708037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.708054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.708065 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.708076 | orchestrator |
2025-05-19 22:02:53.708088 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-05-19 22:02:53.708099 | orchestrator | Monday 19 May 2025 21:57:19 +0000 (0:00:01.616) 0:01:03.571 ************
2025-05-19 22:02:53.708110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.708135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.708148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.708159 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.708170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.708182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.708198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.708210 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.708221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.708245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.708257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.708269 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.708280 | orchestrator |
2025-05-19 22:02:53.708291 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-05-19 22:02:53.708303 | orchestrator | Monday 19 May 2025 21:57:20 +0000 (0:00:00.939) 0:01:04.511 ************
2025-05-19 22:02:53.708314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.708325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.708337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.708348 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.708364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.708382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.708400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.708412 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.708424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-19 22:02:53.708435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-19 22:02:53.708447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-19 22:02:53.708458 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.708469 | orchestrator |
2025-05-19 22:02:53.708480 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-05-19 22:02:53.708491 | orchestrator | Monday 19 May 2025 21:57:21 +0000 (0:00:01.486) 0:01:05.997 ************
2025-05-19 22:02:53.708507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-19 22:02:53.708525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 22:02:53.708537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 22:02:53.708548 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.708566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-19 22:02:53.708578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 22:02:53.708590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 22:02:53.708601 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.708612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-19 22:02:53.708639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-19 22:02:53.708651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-19 22:02:53.708662 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.708673 | orchestrator | 2025-05-19 22:02:53.708684 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-19 22:02:53.708695 | orchestrator | Monday 19 May 2025 21:57:23 +0000 (0:00:01.896) 0:01:07.894 ************ 2025-05-19 22:02:53.708707 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-19 22:02:53.708718 | orchestrator | 
changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-19 22:02:53.708736 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-19 22:02:53.708747 | orchestrator | 2025-05-19 22:02:53.708758 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-19 22:02:53.708769 | orchestrator | Monday 19 May 2025 21:57:25 +0000 (0:00:01.462) 0:01:09.356 ************ 2025-05-19 22:02:53.708780 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-19 22:02:53.708791 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-19 22:02:53.708802 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-19 22:02:53.708814 | orchestrator | 2025-05-19 22:02:53.708825 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-19 22:02:53.708836 | orchestrator | Monday 19 May 2025 21:57:26 +0000 (0:00:01.453) 0:01:10.809 ************ 2025-05-19 22:02:53.708847 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 22:02:53.708858 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 22:02:53.708869 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 22:02:53.708880 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.708908 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 22:02:53.708920 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 22:02:53.708931 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.708943 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 22:02:53.708953 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.708964 | orchestrator | 2025-05-19 22:02:53.708976 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-19 22:02:53.708994 | orchestrator | Monday 19 May 2025 21:57:27 +0000 (0:00:01.202) 0:01:12.012 ************ 2025-05-19 22:02:53.709006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.709022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.709034 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-19 22:02:53.709052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.709064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.709076 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-19 22:02:53.709095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 22:02:53.709107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 22:02:53.709123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-19 22:02:53.709135 | orchestrator | 2025-05-19 22:02:53.709146 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-19 22:02:53.709158 | orchestrator | Monday 19 May 2025 21:57:30 +0000 (0:00:02.697) 0:01:14.709 ************ 2025-05-19 22:02:53.709169 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.709180 | orchestrator | 2025-05-19 22:02:53.709191 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-19 22:02:53.709202 | orchestrator | Monday 19 May 2025 21:57:31 +0000 (0:00:00.951) 0:01:15.661 ************ 2025-05-19 22:02:53.709214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-19 22:02:53.709234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.709247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-19 22:02:53.709293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.709305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-19 22:02:53.709352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.709364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709387 | orchestrator | 2025-05-19 22:02:53.709403 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-19 22:02:53.709414 | orchestrator | Monday 19 May 2025 21:57:35 +0000 (0:00:03.934) 0:01:19.596 ************ 2025-05-19 22:02:53.709426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-19 22:02:53.709444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.709457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-19 22:02:53.709475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.709503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709538 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.709549 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.709567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-19 22:02:53.709587 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.709598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.709621 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.709633 | orchestrator | 2025-05-19 22:02:53.709644 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 
2025-05-19 22:02:53.709655 | orchestrator | Monday 19 May 2025 21:57:36 +0000 (0:00:00.925) 0:01:20.521 ************ 2025-05-19 22:02:53.709671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-19 22:02:53.709684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-19 22:02:53.709696 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.709707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-19 22:02:53.709718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-19 22:02:53.709729 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.709741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-19 22:02:53.709752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-19 22:02:53.709771 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.709782 | orchestrator | 2025-05-19 22:02:53.709799 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-19 22:02:53.709810 | orchestrator | Monday 19 May 2025 21:57:37 +0000 (0:00:01.048) 0:01:21.570 ************ 
2025-05-19 22:02:53.709822 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.709833 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.709844 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.709854 | orchestrator | 2025-05-19 22:02:53.709865 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-19 22:02:53.709877 | orchestrator | Monday 19 May 2025 21:57:38 +0000 (0:00:01.269) 0:01:22.839 ************ 2025-05-19 22:02:53.709887 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.709918 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.709929 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.709940 | orchestrator | 2025-05-19 22:02:53.709951 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-19 22:02:53.709962 | orchestrator | Monday 19 May 2025 21:57:40 +0000 (0:00:02.208) 0:01:25.048 ************ 2025-05-19 22:02:53.709973 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.709984 | orchestrator | 2025-05-19 22:02:53.709995 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-19 22:02:53.710006 | orchestrator | Monday 19 May 2025 21:57:41 +0000 (0:00:00.732) 0:01:25.780 ************ 2025-05-19 22:02:53.710057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.710071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.710089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2025-05-19 22:02:53.710120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.710133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710202 | orchestrator | 2025-05-19 22:02:53.710214 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-19 22:02:53.710225 | orchestrator | Monday 19 May 2025 21:57:45 +0000 (0:00:04.250) 0:01:30.030 ************ 2025-05-19 22:02:53.710244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.710256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710278 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.710290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.710306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710336 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.710354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.710367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.710389 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.710400 | orchestrator | 2025-05-19 22:02:53.710412 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-19 22:02:53.710423 | orchestrator | Monday 19 May 2025 21:57:46 +0000 (0:00:00.599) 0:01:30.630 ************ 2025-05-19 22:02:53.710434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 22:02:53.710447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 22:02:53.710465 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.710476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 22:02:53.710492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 22:02:53.710504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 22:02:53.710515 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.710526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-19 22:02:53.710538 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.710549 | orchestrator | 2025-05-19 22:02:53.710560 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-19 22:02:53.710571 | orchestrator | Monday 19 May 2025 21:57:47 +0000 (0:00:00.871) 0:01:31.501 ************ 2025-05-19 22:02:53.710582 | orchestrator | changed: [testbed-node-0] 
2025-05-19 22:02:53.710593 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.710603 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.710614 | orchestrator | 2025-05-19 22:02:53.710625 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-19 22:02:53.710636 | orchestrator | Monday 19 May 2025 21:57:50 +0000 (0:00:02.828) 0:01:34.330 ************ 2025-05-19 22:02:53.710647 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.710657 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.710668 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.710679 | orchestrator | 2025-05-19 22:02:53.710708 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-19 22:02:53.710720 | orchestrator | Monday 19 May 2025 21:57:52 +0000 (0:00:02.309) 0:01:36.639 ************ 2025-05-19 22:02:53.710730 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.710741 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.710752 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.710763 | orchestrator | 2025-05-19 22:02:53.710774 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-19 22:02:53.710785 | orchestrator | Monday 19 May 2025 21:57:52 +0000 (0:00:00.347) 0:01:36.986 ************ 2025-05-19 22:02:53.710796 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.710806 | orchestrator | 2025-05-19 22:02:53.710817 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-19 22:02:53.710828 | orchestrator | Monday 19 May 2025 21:57:53 +0000 (0:00:00.793) 0:01:37.780 ************ 2025-05-19 22:02:53.710839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-19 22:02:53.710851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-19 22:02:53.710875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-19 22:02:53.710887 | orchestrator | 2025-05-19 22:02:53.710917 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-05-19 22:02:53.710929 | orchestrator | Monday 19 May 2025 21:57:57 +0000 (0:00:03.770) 0:01:41.550 ************ 2025-05-19 22:02:53.710946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-19 22:02:53.710959 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.710970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-19 22:02:53.710982 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.710993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-19 22:02:53.711012 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.711022 | orchestrator | 2025-05-19 22:02:53.711033 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-19 22:02:53.711044 | orchestrator | Monday 19 May 2025 21:57:59 +0000 (0:00:01.712) 0:01:43.262 ************ 2025-05-19 22:02:53.711057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 22:02:53.711074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 22:02:53.711087 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.711098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 22:02:53.711110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 22:02:53.711122 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.711138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 22:02:53.711150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-19 22:02:53.711161 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.711172 | orchestrator | 2025-05-19 22:02:53.711183 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-19 22:02:53.711194 | orchestrator | Monday 19 May 2025 21:58:02 +0000 (0:00:03.366) 0:01:46.629 ************ 2025-05-19 22:02:53.711205 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.711222 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.711233 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.711244 | orchestrator | 2025-05-19 22:02:53.711255 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-19 22:02:53.711266 | orchestrator | Monday 19 May 2025 21:58:03 +0000 (0:00:01.120) 0:01:47.750 ************ 2025-05-19 22:02:53.711277 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.711287 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.711298 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.711309 | orchestrator | 2025-05-19 22:02:53.711320 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-05-19 22:02:53.711330 | 
orchestrator | Monday 19 May 2025 21:58:04 +0000 (0:00:01.211) 0:01:48.961 ************ 2025-05-19 22:02:53.711341 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.711352 | orchestrator | 2025-05-19 22:02:53.711363 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-05-19 22:02:53.711374 | orchestrator | Monday 19 May 2025 21:58:06 +0000 (0:00:01.277) 0:01:50.238 ************ 2025-05-19 22:02:53.711385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.711441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.711510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.711574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 
22:02:53.711609 | orchestrator | 2025-05-19 22:02:53.711620 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-05-19 22:02:53.711631 | orchestrator | Monday 19 May 2025 21:58:13 +0000 (0:00:07.691) 0:01:57.930 ************ 2025-05-19 22:02:53.711647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.711659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711706 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.711718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.711734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711769 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711781 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.711793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.711804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.711843 | 
orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.711861 | orchestrator |
2025-05-19 22:02:53.711872 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-05-19 22:02:53.711883 | orchestrator | Monday 19 May 2025 21:58:15 +0000 (0:00:01.565) 0:01:59.495 ************
2025-05-19 22:02:53.711925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 22:02:53.711944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 22:02:53.711956 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.711967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 22:02:53.711978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 22:02:53.711990 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.712001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 22:02:53.712012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-19 22:02:53.712024 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.712035 | orchestrator |
2025-05-19 22:02:53.712046 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-05-19 22:02:53.712056 | orchestrator | Monday 19 May 2025 21:58:17 +0000 (0:00:01.831) 0:02:01.327 ************
2025-05-19 22:02:53.712067 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.712078 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.712089 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.712100 | orchestrator |
2025-05-19 22:02:53.712111 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-05-19 22:02:53.712122 | orchestrator | Monday 19 May 2025 21:58:19 +0000 (0:00:01.930) 0:02:03.257 ************
2025-05-19 22:02:53.712133 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.712144 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.712155 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.712165 | orchestrator |
2025-05-19 22:02:53.712176 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-05-19 22:02:53.712188 | orchestrator | Monday 19 May 2025 21:58:20 +0000 (0:00:01.820) 0:02:05.078 ************
2025-05-19 22:02:53.712199 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.712210 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.712221 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.712231 | orchestrator |
2025-05-19 22:02:53.712242 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-05-19 22:02:53.712254 | orchestrator | Monday 19 May 2025 21:58:21 +0000 (0:00:00.401) 0:02:05.479 ************
2025-05-19 22:02:53.712264 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.712275 | orchestrator | skipping: [testbed-node-1]
2025-05-19
22:02:53.712286 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.712297 | orchestrator | 2025-05-19 22:02:53.712308 | orchestrator | TASK [include_role : designate] ************************************************ 2025-05-19 22:02:53.712319 | orchestrator | Monday 19 May 2025 21:58:21 +0000 (0:00:00.248) 0:02:05.728 ************ 2025-05-19 22:02:53.712329 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.712351 | orchestrator | 2025-05-19 22:02:53.712362 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-05-19 22:02:53.712373 | orchestrator | Monday 19 May 2025 21:58:22 +0000 (0:00:00.711) 0:02:06.440 ************ 2025-05-19 22:02:53.712389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:02:53.712407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:02:53.712420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:02:53.712507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:02:53.712519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 
'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:02:53.712542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:02:53.712581 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712691 | orchestrator | 2025-05-19 22:02:53.712703 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-19 22:02:53.712714 | orchestrator | Monday 19 May 2025 21:58:26 +0000 (0:00:04.214) 0:02:10.655 ************ 2025-05-19 22:02:53.712732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:02:53.712744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:02:53.712755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 
22:02:53.712830 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.712842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:02:53.712853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:02:53.712872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:02:53.712945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:02:53.712974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.712997 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.713013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.713025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.713042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.713054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.713072 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.713083 | orchestrator | 2025-05-19 22:02:53.713094 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-05-19 22:02:53.713105 | orchestrator | Monday 19 May 2025 21:58:27 +0000 (0:00:00.715) 0:02:11.370 ************ 2025-05-19 22:02:53.713116 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-19 22:02:53.713128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-19 22:02:53.713139 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.713150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-19 22:02:53.713161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-19 22:02:53.713172 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.713182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-05-19 22:02:53.713193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-05-19 22:02:53.713204 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.713215 | orchestrator | 2025-05-19 22:02:53.713226 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-05-19 22:02:53.713237 | orchestrator | Monday 19 May 2025 21:58:28 +0000 (0:00:00.800) 0:02:12.170 ************ 2025-05-19 22:02:53.713248 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.713259 | orchestrator | changed: 
[testbed-node-1] 2025-05-19 22:02:53.713274 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.713285 | orchestrator | 2025-05-19 22:02:53.713296 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-05-19 22:02:53.713307 | orchestrator | Monday 19 May 2025 21:58:29 +0000 (0:00:01.451) 0:02:13.622 ************ 2025-05-19 22:02:53.713318 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.713329 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.713340 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.713351 | orchestrator | 2025-05-19 22:02:53.713361 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-05-19 22:02:53.713372 | orchestrator | Monday 19 May 2025 21:58:31 +0000 (0:00:01.911) 0:02:15.533 ************ 2025-05-19 22:02:53.713383 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.713394 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.713404 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.713415 | orchestrator | 2025-05-19 22:02:53.713426 | orchestrator | TASK [include_role : glance] *************************************************** 2025-05-19 22:02:53.713437 | orchestrator | Monday 19 May 2025 21:58:31 +0000 (0:00:00.291) 0:02:15.825 ************ 2025-05-19 22:02:53.713448 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.713458 | orchestrator | 2025-05-19 22:02:53.713469 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-05-19 22:02:53.713480 | orchestrator | Monday 19 May 2025 21:58:32 +0000 (0:00:00.800) 0:02:16.625 ************ 2025-05-19 22:02:53.713502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:02:53.713527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 22:02:53.713759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:02:53.713793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 22:02:53.713827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:02:53.713850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 22:02:53.713862 | orchestrator | 2025-05-19 22:02:53.713873 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-19 22:02:53.713885 | orchestrator | Monday 19 May 2025 21:58:36 +0000 (0:00:04.327) 0:02:20.953 ************ 2025-05-19 22:02:53.713968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 22:02:53.713991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 22:02:53.714004 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.714052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 22:02:53.714084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 22:02:53.714095 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.714110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 22:02:53.714128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-19 22:02:53.714145 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.714155 | orchestrator | 2025-05-19 22:02:53.714165 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-19 22:02:53.714175 | orchestrator | Monday 19 May 2025 21:58:39 +0000 (0:00:02.871) 0:02:23.824 ************ 2025-05-19 22:02:53.714185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 22:02:53.714196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 22:02:53.714206 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.714221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 22:02:53.714231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 22:02:53.714247 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.714257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-19 22:02:53.714272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-05-19 22:02:53.714283 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.714293 | orchestrator |
2025-05-19 22:02:53.714303 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-05-19 22:02:53.714313 | orchestrator | Monday 19 May 2025 21:58:43 +0000 (0:00:03.454) 0:02:27.279 ************
2025-05-19 22:02:53.714325 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.714336 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.714347 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.714358 | orchestrator |
2025-05-19 22:02:53.714370 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-05-19 22:02:53.714381 | orchestrator | Monday 19 May 2025 21:58:44 +0000 (0:00:01.501) 0:02:28.781 ************
2025-05-19 22:02:53.714392 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.714403 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.714414 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.714425 | orchestrator |
2025-05-19 22:02:53.714436 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-05-19 22:02:53.714447 | orchestrator | Monday 19 May 2025 21:58:46 +0000 (0:00:02.086) 0:02:30.868 ************
2025-05-19 22:02:53.714458 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.714470 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.714482 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.714493 | orchestrator |
2025-05-19 22:02:53.714504 | orchestrator | TASK [include_role : grafana] **************************************************
2025-05-19 22:02:53.714515 | orchestrator | Monday 19 May 2025 21:58:47 +0000 (0:00:00.349) 0:02:31.217 ************
2025-05-19 22:02:53.714526 | orchestrator |
included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.714537 | orchestrator | 2025-05-19 22:02:53.714550 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-19 22:02:53.714561 | orchestrator | Monday 19 May 2025 21:58:47 +0000 (0:00:00.852) 0:02:32.070 ************ 2025-05-19 22:02:53.714573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:02:53.714590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:02:53.714608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:02:53.714619 | orchestrator | 2025-05-19 22:02:53.714630 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-19 22:02:53.714641 | orchestrator | Monday 19 May 2025 21:58:51 +0000 (0:00:03.286) 0:02:35.357 ************ 2025-05-19 22:02:53.714659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 22:02:53.714671 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.714681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 22:02:53.714691 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.714701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 22:02:53.714712 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.714722 | orchestrator | 2025-05-19 22:02:53.714731 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-19 22:02:53.714741 | orchestrator | Monday 19 May 2025 21:58:51 +0000 (0:00:00.409) 0:02:35.766 ************ 2025-05-19 22:02:53.714758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-19 22:02:53.714769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-19 22:02:53.714780 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.714790 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-05-19 22:02:53.714804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-05-19 22:02:53.714814 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.714823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-05-19 22:02:53.714834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-05-19 22:02:53.714843 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.714853 | orchestrator |
2025-05-19 22:02:53.714863 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-05-19 22:02:53.714873 | orchestrator | Monday 19 May 2025 21:58:52 +0000 (0:00:00.654) 0:02:36.421 ************
2025-05-19 22:02:53.714882 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.714908 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.714919 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.714929 | orchestrator |
2025-05-19 22:02:53.714938 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-05-19 22:02:53.714948 | orchestrator | Monday 19 May 2025 21:58:53 +0000 (0:00:01.662) 0:02:38.083 ************
2025-05-19 22:02:53.714958 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.714968 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.714977 | orchestrator | changed:
[testbed-node-2] 2025-05-19 22:02:53.714987 | orchestrator | 2025-05-19 22:02:53.714997 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-19 22:02:53.715007 | orchestrator | Monday 19 May 2025 21:58:55 +0000 (0:00:01.985) 0:02:40.069 ************ 2025-05-19 22:02:53.715016 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.715026 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.715040 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.715050 | orchestrator | 2025-05-19 22:02:53.715060 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-19 22:02:53.715070 | orchestrator | Monday 19 May 2025 21:58:56 +0000 (0:00:00.311) 0:02:40.381 ************ 2025-05-19 22:02:53.715079 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.715089 | orchestrator | 2025-05-19 22:02:53.715099 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-19 22:02:53.715108 | orchestrator | Monday 19 May 2025 21:58:57 +0000 (0:00:00.915) 0:02:41.296 ************ 2025-05-19 22:02:53.715120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 22:02:53.715144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-19 22:02:53.715182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-19 22:02:53.715200 | orchestrator |
2025-05-19 22:02:53.715210 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-05-19 22:02:53.715219 | orchestrator | Monday 19 May 2025 21:59:01 +0000 (0:00:04.415) 0:02:45.712 ************
2025-05-19 22:02:53.715237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-19 22:02:53.715254 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.715270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-19 22:02:53.715282 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.715299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-19 22:02:53.715316 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.715327 | orchestrator |
2025-05-19 22:02:53.715336 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-05-19 22:02:53.715346 | orchestrator | Monday 19 May 2025 21:59:02 +0000 (0:00:00.801) 0:02:46.513 ************
2025-05-19 22:02:53.715356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-19 22:02:53.715367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-19 22:02:53.715377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-19 22:02:53.715392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-19 22:02:53.715404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-05-19 22:02:53.715414 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.715424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-19 22:02:53.715434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-19 22:02:53.715444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-19 22:02:53.715459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-19 22:02:53.715476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-19 22:02:53.715486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-19 22:02:53.715496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-19 22:02:53.715506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-19 22:02:53.715516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-05-19 22:02:53.715526 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.715536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-05-19 22:02:53.715546 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.715555 | orchestrator |
2025-05-19 22:02:53.715565 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-05-19 22:02:53.715575 | orchestrator | Monday 19 May 2025 21:59:03 +0000 (0:00:01.184) 0:02:47.697 ************
2025-05-19 22:02:53.715585 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.715595 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.715604 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.715614 | orchestrator |
2025-05-19 22:02:53.715624 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-05-19 22:02:53.715634 | orchestrator | Monday 19 May 2025 21:59:05 +0000 (0:00:01.671) 0:02:49.369 ************
2025-05-19 22:02:53.715643 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.715653 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.715662 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.715672 | orchestrator |
2025-05-19 22:02:53.715682 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-05-19 22:02:53.715692 | orchestrator | Monday 19 May 2025 21:59:07 +0000 (0:00:02.168) 0:02:51.538 ************
2025-05-19 22:02:53.715706 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.715716 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.715725 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.715735 | orchestrator |
2025-05-19 22:02:53.715745 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-05-19 22:02:53.715755 | orchestrator | Monday 19 May 2025 21:59:07 +0000 (0:00:00.328) 0:02:51.866 ************
2025-05-19 22:02:53.715765 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.715774 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.715784 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.715794 | orchestrator |
2025-05-19 22:02:53.715803 | orchestrator | TASK [include_role : keystone] *************************************************
2025-05-19 22:02:53.715813 | orchestrator | Monday 19 May 2025 21:59:08 +0000 (0:00:00.331) 0:02:52.197 ************
2025-05-19 22:02:53.715829 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:02:53.715839 | orchestrator |
2025-05-19 22:02:53.715848 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-05-19 22:02:53.715858 | orchestrator | Monday 19 May 2025 21:59:09 +0000 (0:00:01.320) 0:02:53.518 ************
2025-05-19 22:02:53.715874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-19 22:02:53.715887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-19 22:02:53.715948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-19 22:02:53.715960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-19 22:02:53.715975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-19 22:02:53.715992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-19 22:02:53.716010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-19 22:02:53.716021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-19 22:02:53.716032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-19 22:02:53.716042 | orchestrator |
2025-05-19 22:02:53.716052 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-05-19 22:02:53.716062 | orchestrator | Monday 19 May 2025 21:59:13 +0000 (0:00:03.855) 0:02:57.374 ************
2025-05-19 22:02:53.716077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-19 22:02:53.716094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-19 22:02:53.716110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-19 22:02:53.716121 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.716131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-19 22:02:53.716142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-19 22:02:53.716152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-19 22:02:53.716169 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.716187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-19 22:02:53.716203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-19 22:02:53.716214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-19 22:02:53.716224 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.716234 | orchestrator |
2025-05-19 22:02:53.716244 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-05-19 22:02:53.716254 | orchestrator | Monday 19 May 2025 21:59:13 +0000 (0:00:00.554) 0:02:57.928 ************
2025-05-19 22:02:53.716264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-05-19 22:02:53.716275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-05-19 22:02:53.716286 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.716296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-05-19 22:02:53.716306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-05-19 22:02:53.716316 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.716335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-05-19 22:02:53.716346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-05-19 22:02:53.716360 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.716370 | orchestrator |
2025-05-19 22:02:53.716380 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-05-19 22:02:53.716390 | orchestrator | Monday 19 May 2025 21:59:14 +0000 (0:00:01.132) 0:02:59.061 ************
2025-05-19 22:02:53.716400 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.716409 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.716419 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.716428 | orchestrator |
2025-05-19 22:02:53.716438 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-05-19 22:02:53.716448 | orchestrator | Monday 19 May 2025 21:59:16 +0000 (0:00:01.318) 0:03:00.379 ************
2025-05-19 22:02:53.716458 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:02:53.716465 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:02:53.716473 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:02:53.716481 | orchestrator |
2025-05-19 22:02:53.716489 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-05-19 22:02:53.716497 | orchestrator | Monday 19 May 2025 21:59:18 +0000 (0:00:00.351) 0:03:02.534 ************
2025-05-19 22:02:53.716505 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.716513 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.716521 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.716529 | orchestrator |
2025-05-19 22:02:53.716536 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-05-19 22:02:53.716545 | orchestrator | Monday 19 May 2025 21:59:18 +0000 (0:00:00.351) 0:03:02.886 ************
2025-05-19 22:02:53.716553 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:02:53.716561 | orchestrator |
2025-05-19 22:02:53.716568 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-05-19 22:02:53.716577 | orchestrator | Monday 19 May 2025 21:59:20 +0000 (0:00:01.319) 0:03:04.205 ************
2025-05-19 22:02:53.716724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:02:53.716739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.716758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:02:53.716771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.716780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 22:02:53.716842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.716854 | orchestrator | 2025-05-19 22:02:53.716862 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-19 22:02:53.716871 | orchestrator | Monday 19 May 2025 21:59:23 +0000 (0:00:03.396) 0:03:07.602 ************ 2025-05-19 22:02:53.716879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 22:02:53.716911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.716920 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.716933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 22:02:53.716995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717007 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.717015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 22:02:53.717031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717039 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.717047 | orchestrator | 2025-05-19 22:02:53.717055 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-19 22:02:53.717063 | orchestrator | Monday 19 May 2025 21:59:24 +0000 (0:00:00.681) 0:03:08.284 ************ 2025-05-19 22:02:53.717072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-19 22:02:53.717080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-19 22:02:53.717089 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.717097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  
2025-05-19 22:02:53.717109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-19 22:02:53.717118 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.717126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-19 22:02:53.717134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-19 22:02:53.717142 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.717150 | orchestrator | 2025-05-19 22:02:53.717158 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-19 22:02:53.717166 | orchestrator | Monday 19 May 2025 21:59:25 +0000 (0:00:01.490) 0:03:09.774 ************ 2025-05-19 22:02:53.717174 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.717182 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.717190 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.717197 | orchestrator | 2025-05-19 22:02:53.717205 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-19 22:02:53.717213 | orchestrator | Monday 19 May 2025 21:59:26 +0000 (0:00:01.324) 0:03:11.098 ************ 2025-05-19 22:02:53.717221 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.717229 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.717237 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.717245 | orchestrator | 2025-05-19 22:02:53.717252 | orchestrator | TASK [include_role : manila] 
*************************************************** 2025-05-19 22:02:53.717260 | orchestrator | Monday 19 May 2025 21:59:29 +0000 (0:00:02.088) 0:03:13.187 ************ 2025-05-19 22:02:53.717318 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.717329 | orchestrator | 2025-05-19 22:02:53.717338 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-19 22:02:53.717352 | orchestrator | Monday 19 May 2025 21:59:30 +0000 (0:00:01.423) 0:03:14.611 ************ 2025-05-19 22:02:53.717360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-19 22:02:53.717369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': 
{'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-19 22:02:53.717457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-19 22:02:53.717505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717587 | orchestrator | 2025-05-19 22:02:53.717596 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-19 22:02:53.717604 | orchestrator | Monday 19 May 2025 21:59:34 +0000 (0:00:04.203) 0:03:18.815 ************ 2025-05-19 22:02:53.717613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-19 22:02:53.717621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717650 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.717659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-19 22:02:53.717724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717753 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.717762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}}}})  2025-05-19 22:02:53.717774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.717860 | 
orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.717868 | orchestrator | 2025-05-19 22:02:53.717877 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-19 22:02:53.717885 | orchestrator | Monday 19 May 2025 21:59:35 +0000 (0:00:01.022) 0:03:19.837 ************ 2025-05-19 22:02:53.717910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-19 22:02:53.717919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-19 22:02:53.717927 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.717935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-19 22:02:53.717944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-19 22:02:53.717952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-19 22:02:53.717960 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.717968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-19 22:02:53.717976 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.717984 | orchestrator | 2025-05-19 
22:02:53.717992 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-19 22:02:53.718000 | orchestrator | Monday 19 May 2025 21:59:36 +0000 (0:00:00.967) 0:03:20.805 ************ 2025-05-19 22:02:53.718008 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.718041 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.718049 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.718057 | orchestrator | 2025-05-19 22:02:53.718065 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-19 22:02:53.718073 | orchestrator | Monday 19 May 2025 21:59:39 +0000 (0:00:02.506) 0:03:23.311 ************ 2025-05-19 22:02:53.718081 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.718089 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.718097 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.718105 | orchestrator | 2025-05-19 22:02:53.718113 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-19 22:02:53.718121 | orchestrator | Monday 19 May 2025 21:59:41 +0000 (0:00:02.281) 0:03:25.593 ************ 2025-05-19 22:02:53.718129 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.718145 | orchestrator | 2025-05-19 22:02:53.718153 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-19 22:02:53.718161 | orchestrator | Monday 19 May 2025 21:59:42 +0000 (0:00:01.068) 0:03:26.661 ************ 2025-05-19 22:02:53.718169 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 22:02:53.718177 | orchestrator | 2025-05-19 22:02:53.718185 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-19 22:02:53.718197 | orchestrator | Monday 19 May 2025 21:59:45 +0000 (0:00:03.280) 0:03:29.942 ************ 
2025-05-19 22:02:53.718263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:02:53.718277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 22:02:53.718286 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.718299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:02:53.718314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 22:02:53.718323 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.718384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:02:53.718396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 22:02:53.718411 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.718419 | orchestrator | 2025-05-19 22:02:53.718427 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-19 22:02:53.718435 | orchestrator | Monday 19 May 2025 21:59:48 +0000 (0:00:02.895) 0:03:32.838 ************ 
2025-05-19 22:02:53.718449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:02:53.718507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 22:02:53.718519 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.718528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:02:53.718550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 22:02:53.718559 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.718618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:02:53.718630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-19 22:02:53.718639 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.718647 | orchestrator | 2025-05-19 22:02:53.718655 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-19 22:02:53.718663 | orchestrator | Monday 19 May 2025 21:59:50 +0000 (0:00:02.267) 0:03:35.105 ************ 2025-05-19 
22:02:53.718678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 22:02:53.718687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 22:02:53.718696 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.718709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 22:02:53.718718 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 22:02:53.718726 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.718784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 22:02:53.718796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-19 22:02:53.718804 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.718813 
| orchestrator | 2025-05-19 22:02:53.718821 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-19 22:02:53.718835 | orchestrator | Monday 19 May 2025 21:59:53 +0000 (0:00:02.631) 0:03:37.737 ************ 2025-05-19 22:02:53.718843 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.718851 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.718859 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.718867 | orchestrator | 2025-05-19 22:02:53.718875 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-19 22:02:53.718883 | orchestrator | Monday 19 May 2025 21:59:55 +0000 (0:00:02.065) 0:03:39.802 ************ 2025-05-19 22:02:53.718934 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.718944 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.718952 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.718960 | orchestrator | 2025-05-19 22:02:53.718968 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-19 22:02:53.718976 | orchestrator | Monday 19 May 2025 21:59:57 +0000 (0:00:01.482) 0:03:41.284 ************ 2025-05-19 22:02:53.718984 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.718992 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.719000 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.719008 | orchestrator | 2025-05-19 22:02:53.719016 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-19 22:02:53.719024 | orchestrator | Monday 19 May 2025 21:59:57 +0000 (0:00:00.327) 0:03:41.612 ************ 2025-05-19 22:02:53.719032 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.719051 | orchestrator | 2025-05-19 22:02:53.719059 | orchestrator | TASK [haproxy-config : Copying over 
memcached haproxy config] ****************** 2025-05-19 22:02:53.719067 | orchestrator | Monday 19 May 2025 21:59:58 +0000 (0:00:01.151) 0:03:42.764 ************ 2025-05-19 22:02:53.719081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-19 22:02:53.719090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-19 22:02:53.719156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-19 22:02:53.719177 | orchestrator | 2025-05-19 22:02:53.719186 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-19 22:02:53.719194 | orchestrator | Monday 19 May 2025 22:00:00 +0000 (0:00:01.809) 0:03:44.573 ************ 2025-05-19 22:02:53.719202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-19 22:02:53.719211 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.719219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-19 22:02:53.719227 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.719240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-19 22:02:53.719249 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.719257 | orchestrator | 2025-05-19 22:02:53.719265 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-19 22:02:53.719273 | orchestrator | Monday 19 May 2025 22:00:00 +0000 (0:00:00.424) 0:03:44.998 ************ 2025-05-19 22:02:53.719281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}})  2025-05-19 22:02:53.719290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-19 22:02:53.719298 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.719307 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.719364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-19 22:02:53.719382 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.719390 | orchestrator | 2025-05-19 22:02:53.719398 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-19 22:02:53.719406 | orchestrator | Monday 19 May 2025 22:00:01 +0000 (0:00:00.630) 0:03:45.628 ************ 2025-05-19 22:02:53.719414 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.719422 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.719428 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.719435 | orchestrator | 2025-05-19 22:02:53.719442 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-19 22:02:53.719449 | orchestrator | Monday 19 May 2025 22:00:02 +0000 (0:00:00.778) 0:03:46.406 ************ 2025-05-19 22:02:53.719455 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.719462 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.719469 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.719476 | orchestrator | 2025-05-19 22:02:53.719482 | orchestrator | TASK [include_role : mistral] 
************************************************** 2025-05-19 22:02:53.719489 | orchestrator | Monday 19 May 2025 22:00:03 +0000 (0:00:01.320) 0:03:47.727 ************ 2025-05-19 22:02:53.719496 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.719503 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.719510 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.719517 | orchestrator | 2025-05-19 22:02:53.719523 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-19 22:02:53.719530 | orchestrator | Monday 19 May 2025 22:00:03 +0000 (0:00:00.344) 0:03:48.071 ************ 2025-05-19 22:02:53.719537 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.719544 | orchestrator | 2025-05-19 22:02:53.719550 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-19 22:02:53.719557 | orchestrator | Monday 19 May 2025 22:00:05 +0000 (0:00:01.451) 0:03:49.523 ************ 2025-05-19 22:02:53.719564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 
2025-05-19 22:02:53.719575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-19 22:02:53.719654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-05-19 22:02:53.719670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-05-19 22:02:53.719681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:02:53.719744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 22:02:53.719751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-19 22:02:53.719766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-05-19 22:02:53.719825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-19 22:02:53.719876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-19 22:02:53.719889 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:02:53.719957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.719974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-05-19 22:02:53.719982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-05-19 22:02:53.719989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:02:53.720012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-19 22:02:53.720073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 22:02:53.720080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-05-19 22:02:53.720088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-19 22:02:53.720180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:02:53.720206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-19 22:02:53.720213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-05-19 22:02:53.720282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-05-19 22:02:53.720289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:02:53.720312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-19 22:02:53.720369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-05-19 22:02:53.720379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-19 22:02:53.720399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:02:53.720410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720417 | orchestrator |
2025-05-19 22:02:53.720424 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-05-19 22:02:53.720431 | orchestrator | Monday 19 May 2025 22:00:09 +0000 (0:00:04.561) 0:03:54.084 ************
2025-05-19 22:02:53.720481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 22:02:53.720491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-19 22:02:53.720498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image':
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:02:53.720570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 22:02:53.720581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 22:02:53.720618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 22:02:53.720692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 22:02:53.720702 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:02:53.720734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 22:02:53.720741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 22:02:53.720800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 22:02:53.720808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:02:53.720820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 
5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 22:02:53.720888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:02:53.720913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.720954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 22:02:53.721005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-19 22:02:53.721016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 22:02:53.721029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.721036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:02:53.721047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 22:02:53.721054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 22:02:53.721061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.721123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.721138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 22:02:53.721146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 
'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 22:02:53.721153 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.721164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.721171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:02:53.721197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:02:53.721205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.721216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.721223 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.721231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-19 22:02:53.721238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-19 22:02:53.721248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.721273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-19 22:02:53.721281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:02:53.721293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.721300 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.721307 | orchestrator | 2025-05-19 22:02:53.721314 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-19 22:02:53.721321 | orchestrator | Monday 19 May 2025 22:00:11 +0000 (0:00:01.521) 0:03:55.605 ************ 2025-05-19 22:02:53.721328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-19 22:02:53.721336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-19 22:02:53.721343 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.721350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  
2025-05-19 22:02:53.721356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-19 22:02:53.721363 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.721370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-19 22:02:53.721381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-19 22:02:53.721387 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.721394 | orchestrator | 2025-05-19 22:02:53.721401 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-19 22:02:53.721408 | orchestrator | Monday 19 May 2025 22:00:13 +0000 (0:00:02.072) 0:03:57.678 ************ 2025-05-19 22:02:53.721415 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.721421 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.721428 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.721435 | orchestrator | 2025-05-19 22:02:53.721442 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-19 22:02:53.721448 | orchestrator | Monday 19 May 2025 22:00:14 +0000 (0:00:01.354) 0:03:59.032 ************ 2025-05-19 22:02:53.721455 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.721462 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.721468 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.721475 | orchestrator | 2025-05-19 22:02:53.721482 | orchestrator | TASK [include_role : placement] 
************************************************ 2025-05-19 22:02:53.721493 | orchestrator | Monday 19 May 2025 22:00:16 +0000 (0:00:02.075) 0:04:01.108 ************ 2025-05-19 22:02:53.721500 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.721507 | orchestrator | 2025-05-19 22:02:53.721514 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-19 22:02:53.721520 | orchestrator | Monday 19 May 2025 22:00:18 +0000 (0:00:01.176) 0:04:02.285 ************ 2025-05-19 22:02:53.721545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.721554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.721562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.721568 | orchestrator | 2025-05-19 22:02:53.721575 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-19 22:02:53.721582 | orchestrator | Monday 19 May 2025 22:00:21 +0000 (0:00:03.432) 0:04:05.717 ************ 2025-05-19 22:02:53.721593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.721605 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.721629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.721637 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.721644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.721651 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.721658 | orchestrator | 2025-05-19 22:02:53.721664 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-19 22:02:53.721671 | orchestrator | Monday 19 May 2025 22:00:22 +0000 (0:00:00.512) 0:04:06.230 ************ 2025-05-19 22:02:53.721678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 22:02:53.721685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 22:02:53.721692 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.721699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 22:02:53.721706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 22:02:53.721712 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.721719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 22:02:53.721745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-19 22:02:53.721752 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.721760 | orchestrator | 2025-05-19 22:02:53.721769 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-19 22:02:53.721777 | orchestrator | Monday 19 May 2025 22:00:22 +0000 (0:00:00.724) 0:04:06.955 ************ 2025-05-19 22:02:53.721785 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.721793 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.721801 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.721808 | orchestrator | 2025-05-19 22:02:53.721816 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-19 22:02:53.721824 | orchestrator | Monday 19 May 2025 22:00:24 +0000 (0:00:01.603) 0:04:08.558 ************ 2025-05-19 22:02:53.721831 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.721839 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.721846 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.721854 | orchestrator | 2025-05-19 22:02:53.721862 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-19 22:02:53.721869 | orchestrator | Monday 19 May 2025 
22:00:26 +0000 (0:00:02.048) 0:04:10.607 ************ 2025-05-19 22:02:53.721877 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.721884 | orchestrator | 2025-05-19 22:02:53.721904 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-19 22:02:53.721913 | orchestrator | Monday 19 May 2025 22:00:27 +0000 (0:00:01.251) 0:04:11.859 ************ 2025-05-19 22:02:53.721943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.721954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.721962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.721983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.722030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.722040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.722050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.722059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.722076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.722084 | orchestrator | 2025-05-19 22:02:53.722092 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-19 22:02:53.722101 | orchestrator | Monday 19 May 2025 22:00:32 +0000 (0:00:05.024) 0:04:16.884 ************ 2025-05-19 22:02:53.722131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.722140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.722147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.722154 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.722164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.722177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.722184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.722191 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.722217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.722225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.722237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.722245 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.722251 | orchestrator | 2025-05-19 22:02:53.722258 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-19 22:02:53.722265 | orchestrator | Monday 19 May 2025 22:00:33 +0000 (0:00:00.983) 0:04:17.867 ************ 2025-05-19 22:02:53.722272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722304 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.722311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722372 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.722379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-19 22:02:53.722398 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.722404 | orchestrator | 2025-05-19 22:02:53.722411 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-19 22:02:53.722418 | orchestrator | Monday 19 May 2025 22:00:34 +0000 (0:00:00.852) 0:04:18.720 ************ 
2025-05-19 22:02:53.722425 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.722431 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.722438 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.722445 | orchestrator | 2025-05-19 22:02:53.722451 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-19 22:02:53.722458 | orchestrator | Monday 19 May 2025 22:00:36 +0000 (0:00:01.617) 0:04:20.337 ************ 2025-05-19 22:02:53.722465 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.722472 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.722478 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.722485 | orchestrator | 2025-05-19 22:02:53.722492 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-19 22:02:53.722499 | orchestrator | Monday 19 May 2025 22:00:38 +0000 (0:00:02.160) 0:04:22.498 ************ 2025-05-19 22:02:53.722505 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.722512 | orchestrator | 2025-05-19 22:02:53.722519 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-19 22:02:53.722525 | orchestrator | Monday 19 May 2025 22:00:39 +0000 (0:00:01.614) 0:04:24.113 ************ 2025-05-19 22:02:53.722532 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-19 22:02:53.722539 | orchestrator | 2025-05-19 22:02:53.722546 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-19 22:02:53.722552 | orchestrator | Monday 19 May 2025 22:00:41 +0000 (0:00:01.088) 0:04:25.201 ************ 2025-05-19 22:02:53.722563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-19 22:02:53.722570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-19 22:02:53.722578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-19 22:02:53.722585 | orchestrator | 2025-05-19 22:02:53.722609 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-19 22:02:53.722617 | orchestrator | Monday 19 May 2025 22:00:44 +0000 (0:00:03.920) 0:04:29.122 ************ 2025-05-19 22:02:53.722629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': 
{'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 22:02:53.722636 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.722643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 22:02:53.722650 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.722657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 22:02:53.722664 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.722671 | orchestrator | 2025-05-19 22:02:53.722678 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-19 22:02:53.722685 | orchestrator | Monday 19 May 2025 22:00:46 +0000 (0:00:01.335) 0:04:30.458 ************ 2025-05-19 22:02:53.722692 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 22:02:53.722699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 22:02:53.722706 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.722713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 22:02:53.722723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 22:02:53.722730 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.722737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 22:02:53.722744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-19 22:02:53.722751 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.722758 | orchestrator | 2025-05-19 22:02:53.722764 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL 
users config] ********** 2025-05-19 22:02:53.722775 | orchestrator | Monday 19 May 2025 22:00:48 +0000 (0:00:01.892) 0:04:32.350 ************ 2025-05-19 22:02:53.722782 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.722789 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.722795 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.722802 | orchestrator | 2025-05-19 22:02:53.722809 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-19 22:02:53.722815 | orchestrator | Monday 19 May 2025 22:00:50 +0000 (0:00:02.471) 0:04:34.821 ************ 2025-05-19 22:02:53.722822 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.722829 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.722836 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.722842 | orchestrator | 2025-05-19 22:02:53.722867 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-19 22:02:53.722875 | orchestrator | Monday 19 May 2025 22:00:53 +0000 (0:00:03.058) 0:04:37.879 ************ 2025-05-19 22:02:53.722882 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-19 22:02:53.722889 | orchestrator | 2025-05-19 22:02:53.722907 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-19 22:02:53.722914 | orchestrator | Monday 19 May 2025 22:00:54 +0000 (0:00:00.864) 0:04:38.744 ************ 2025-05-19 22:02:53.722921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 22:02:53.722928 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.722935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 22:02:53.722942 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.722949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 22:02:53.722956 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.722963 | orchestrator | 2025-05-19 22:02:53.722970 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-19 22:02:53.722977 | orchestrator | Monday 19 May 2025 22:00:55 +0000 (0:00:01.350) 0:04:40.094 ************ 2025-05-19 22:02:53.722987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 22:02:53.722999 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.723007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 22:02:53.723013 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.723020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-19 22:02:53.723028 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.723034 | orchestrator | 2025-05-19 22:02:53.723060 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-19 22:02:53.723068 | 
orchestrator | Monday 19 May 2025 22:00:57 +0000 (0:00:01.592) 0:04:41.686 ************ 2025-05-19 22:02:53.723075 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.723081 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.723088 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.723094 | orchestrator | 2025-05-19 22:02:53.723101 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-19 22:02:53.723108 | orchestrator | Monday 19 May 2025 22:00:58 +0000 (0:00:01.294) 0:04:42.980 ************ 2025-05-19 22:02:53.723115 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.723121 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.723128 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.723135 | orchestrator | 2025-05-19 22:02:53.723141 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-19 22:02:53.723148 | orchestrator | Monday 19 May 2025 22:01:01 +0000 (0:00:02.430) 0:04:45.411 ************ 2025-05-19 22:02:53.723155 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.723162 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.723168 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.723175 | orchestrator | 2025-05-19 22:02:53.723181 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-19 22:02:53.723188 | orchestrator | Monday 19 May 2025 22:01:04 +0000 (0:00:03.189) 0:04:48.601 ************ 2025-05-19 22:02:53.723195 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-19 22:02:53.723202 | orchestrator | 2025-05-19 22:02:53.723208 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-19 22:02:53.723215 | orchestrator | Monday 19 May 2025 22:01:05 +0000 
(0:00:01.096) 0:04:49.698 ************ 2025-05-19 22:02:53.723222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 22:02:53.723229 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.723241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 22:02:53.723248 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.723260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 22:02:53.723267 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.723274 | orchestrator | 
2025-05-19 22:02:53.723281 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-19 22:02:53.723288 | orchestrator | Monday 19 May 2025 22:01:06 +0000 (0:00:01.018) 0:04:50.716 ************ 2025-05-19 22:02:53.723294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 22:02:53.723301 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.723326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 22:02:53.723334 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.723341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-19 22:02:53.723348 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.723355 | orchestrator | 2025-05-19 22:02:53.723362 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-19 22:02:53.723368 | orchestrator | Monday 19 May 2025 22:01:07 +0000 (0:00:01.244) 0:04:51.960 ************ 2025-05-19 22:02:53.723375 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.723382 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.723389 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.723395 | orchestrator | 2025-05-19 22:02:53.723402 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-19 22:02:53.723409 | orchestrator | Monday 19 May 2025 22:01:09 +0000 (0:00:01.738) 0:04:53.699 ************ 2025-05-19 22:02:53.723420 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.723427 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.723434 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.723440 | orchestrator | 2025-05-19 22:02:53.723447 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-19 22:02:53.723454 | orchestrator | Monday 19 May 2025 22:01:11 +0000 (0:00:02.298) 0:04:55.998 ************ 2025-05-19 22:02:53.723461 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.723467 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.723474 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.723481 | orchestrator | 2025-05-19 22:02:53.723487 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-19 22:02:53.723494 | orchestrator | Monday 19 May 2025 22:01:15 +0000 (0:00:03.160) 0:04:59.158 ************ 2025-05-19 22:02:53.723501 | orchestrator | 
included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.723508 | orchestrator | 2025-05-19 22:02:53.723514 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-19 22:02:53.723521 | orchestrator | Monday 19 May 2025 22:01:16 +0000 (0:00:01.331) 0:05:00.490 ************ 2025-05-19 22:02:53.723532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.723539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:02:53.723564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.723573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}})  2025-05-19 22:02:53.723591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 
22:02:53.723616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.723641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.723649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 
'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.723661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:02:53.723668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.723689 | orchestrator | 2025-05-19 22:02:53.723696 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-19 22:02:53.723703 | orchestrator | Monday 19 May 2025 22:01:20 +0000 (0:00:03.940) 0:05:04.430 ************ 2025-05-19 22:02:53.723742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.723756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:02:53.723763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.723788 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.723795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.723821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 
22:02:53.723834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.723855 | orchestrator | skipping: [testbed-node-1] 2025-05-19 
22:02:53.723865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.723872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:02:53.723947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:02:53.723969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:02:53.723976 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.723983 | orchestrator | 2025-05-19 22:02:53.723990 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-19 22:02:53.723997 | orchestrator | Monday 19 May 2025 22:01:20 +0000 (0:00:00.696) 0:05:05.127 ************ 2025-05-19 22:02:53.724004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}})  2025-05-19 22:02:53.724011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 22:02:53.724018 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.724025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 22:02:53.724032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 22:02:53.724038 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.724045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 22:02:53.724056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-19 22:02:53.724063 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.724069 | orchestrator | 2025-05-19 22:02:53.724076 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-19 22:02:53.724083 | orchestrator | Monday 19 May 2025 22:01:21 +0000 (0:00:00.885) 0:05:06.013 ************ 2025-05-19 22:02:53.724090 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.724096 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.724103 | orchestrator | changed: 
[testbed-node-2] 2025-05-19 22:02:53.724109 | orchestrator | 2025-05-19 22:02:53.724116 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-19 22:02:53.724128 | orchestrator | Monday 19 May 2025 22:01:23 +0000 (0:00:01.788) 0:05:07.801 ************ 2025-05-19 22:02:53.724134 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.724141 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.724148 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.724155 | orchestrator | 2025-05-19 22:02:53.724161 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-19 22:02:53.724168 | orchestrator | Monday 19 May 2025 22:01:25 +0000 (0:00:02.161) 0:05:09.962 ************ 2025-05-19 22:02:53.724175 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.724181 | orchestrator | 2025-05-19 22:02:53.724188 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-19 22:02:53.724195 | orchestrator | Monday 19 May 2025 22:01:27 +0000 (0:00:01.316) 0:05:11.279 ************ 2025-05-19 22:02:53.724221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 22:02:53.724230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 22:02:53.724237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-19 22:02:53.724249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 22:02:53.724281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 
22:02:53.724290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 22:02:53.724298 | orchestrator | 2025-05-19 22:02:53.724305 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-19 22:02:53.724312 | orchestrator | Monday 19 May 2025 22:01:32 +0000 (0:00:05.487) 0:05:16.766 ************ 2025-05-19 22:02:53.724319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 22:02:53.724330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 22:02:53.724342 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.724367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 22:02:53.724387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 22:02:53.724394 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.724401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-19 22:02:53.724411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-19 22:02:53.724422 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.724429 | orchestrator | 2025-05-19 22:02:53.724435 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-19 22:02:53.724441 | orchestrator | Monday 19 May 2025 22:01:33 +0000 (0:00:01.083) 0:05:17.849 ************ 2025-05-19 22:02:53.724448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})
2025-05-19 22:02:53.724455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-19 22:02:53.724461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-19 22:02:53.724485 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.724492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-19 22:02:53.724499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-19 22:02:53.724505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-19 22:02:53.724512 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.724518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-05-19 22:02:53.724524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-19 22:02:53.724531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-05-19 22:02:53.724537 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.724543 | orchestrator |
2025-05-19 22:02:53.724550 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-05-19 22:02:53.724556 | orchestrator | Monday 19 May 2025 22:01:34 +0000 (0:00:00.907) 0:05:18.757 ************
2025-05-19 22:02:53.724562 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.724569 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.724575 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.724585 | orchestrator |
2025-05-19 22:02:53.724591 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-05-19 22:02:53.724598 | orchestrator | Monday 19 May 2025 22:01:35 +0000 (0:00:00.418) 0:05:19.176 ************
2025-05-19 22:02:53.724604 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.724610 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.724617 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:02:53.724623 | orchestrator |
2025-05-19 22:02:53.724629 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-05-19 22:02:53.724635 | orchestrator | Monday 19 May 2025 22:01:36 +0000 (0:00:01.339) 0:05:20.515 ************
2025-05-19 22:02:53.724641 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:02:53.724648 | orchestrator |
2025-05-19 22:02:53.724654 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-05-19 22:02:53.724660 | orchestrator | Monday 19 May 2025 22:01:38 +0000 (0:00:01.696) 0:05:22.211 ************
2025-05-19 22:02:53.724670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-19 22:02:53.724677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 22:02:53.724710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.724732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-19 22:02:53.724745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 22:02:53.724755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-19 22:02:53.724786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.724800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 22:02:53.724812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.724835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-19 22:02:53.724846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-19 22:02:53.724853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.724879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-19 22:02:53.724886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-19 22:02:53.724910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-19 22:02:53.724923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-19 22:02:53.724929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.724961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.724968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.724975 | orchestrator |
2025-05-19 22:02:53.724987 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-05-19 22:02:53.724994 | orchestrator | Monday 19 May 2025 22:01:42 +0000 (0:00:04.161) 0:05:26.373 ************
2025-05-19 22:02:53.725000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-19 22:02:53.725007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 22:02:53.725013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.725023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.725030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.725039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-19 22:02:53.725051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-19 22:02:53.725057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.725064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.725070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.725079 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:02:53.725086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-19 22:02:53.725092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 22:02:53.725103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.725114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.725120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.725127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-19 22:02:53.725137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-19 22:02:53.725143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.725153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.725165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.725171 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:02:53.725178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-19 22:02:53.725185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 22:02:53.725191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.725201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:02:53.725207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 22:02:53.725217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-19 22:02:53.725228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-19 22:02:53.725235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:02:53.725241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:02:53.725251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 22:02:53.725257 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.725264 | orchestrator | 2025-05-19 22:02:53.725270 | orchestrator | TASK 
[haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-19 22:02:53.725276 | orchestrator | Monday 19 May 2025 22:01:43 +0000 (0:00:01.267) 0:05:27.640 ************ 2025-05-19 22:02:53.725283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-19 22:02:53.725290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-19 22:02:53.725301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 22:02:53.725310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 22:02:53.725317 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.725324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-19 22:02:53.725330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-19 22:02:53.725337 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 22:02:53.725343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 22:02:53.725350 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.725356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-19 22:02:53.725362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-19 22:02:53.725369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 22:02:53.725375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-19 22:02:53.725382 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.725391 | orchestrator | 2025-05-19 
22:02:53.725397 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-19 22:02:53.725404 | orchestrator | Monday 19 May 2025 22:01:44 +0000 (0:00:00.981) 0:05:28.621 ************ 2025-05-19 22:02:53.725410 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.725417 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.725423 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.725429 | orchestrator | 2025-05-19 22:02:53.725435 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-19 22:02:53.725441 | orchestrator | Monday 19 May 2025 22:01:44 +0000 (0:00:00.459) 0:05:29.080 ************ 2025-05-19 22:02:53.725447 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.725454 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.725467 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.725473 | orchestrator | 2025-05-19 22:02:53.725479 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-19 22:02:53.725486 | orchestrator | Monday 19 May 2025 22:01:46 +0000 (0:00:01.774) 0:05:30.855 ************ 2025-05-19 22:02:53.725492 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.725498 | orchestrator | 2025-05-19 22:02:53.725504 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-19 22:02:53.725510 | orchestrator | Monday 19 May 2025 22:01:48 +0000 (0:00:01.752) 0:05:32.608 ************ 2025-05-19 22:02:53.725520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 22:02:53.725527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 22:02:53.725534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-19 22:02:53.725541 | orchestrator | 2025-05-19 22:02:53.725547 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-19 22:02:53.725554 | orchestrator | Monday 19 May 2025 22:01:51 +0000 (0:00:02.621) 0:05:35.230 ************ 2025-05-19 22:02:53.725563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-19 22:02:53.725576 | 
orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.725585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-19 22:02:53.725592 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.725599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-19 22:02:53.725606 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.725612 | orchestrator | 2025-05-19 22:02:53.725618 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-19 22:02:53.725625 | orchestrator | Monday 19 May 2025 22:01:51 +0000 (0:00:00.387) 0:05:35.617 ************ 2025-05-19 22:02:53.725631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-19 22:02:53.725637 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.725644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-19 22:02:53.725650 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.725656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-19 22:02:53.725662 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.725675 | orchestrator | 2025-05-19 22:02:53.725681 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-19 22:02:53.725688 | orchestrator | Monday 19 May 2025 22:01:52 +0000 (0:00:00.988) 0:05:36.605 ************ 2025-05-19 22:02:53.725694 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.725700 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.725706 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.725712 | orchestrator | 2025-05-19 22:02:53.725718 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-19 
22:02:53.725725 | orchestrator | Monday 19 May 2025 22:01:52 +0000 (0:00:00.412) 0:05:37.018 ************ 2025-05-19 22:02:53.725731 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.725737 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.725743 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.725749 | orchestrator | 2025-05-19 22:02:53.725755 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-19 22:02:53.725761 | orchestrator | Monday 19 May 2025 22:01:54 +0000 (0:00:01.345) 0:05:38.363 ************ 2025-05-19 22:02:53.725768 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:02:53.725774 | orchestrator | 2025-05-19 22:02:53.725783 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-19 22:02:53.725790 | orchestrator | Monday 19 May 2025 22:01:56 +0000 (0:00:01.907) 0:05:40.271 ************ 2025-05-19 22:02:53.725796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.725806 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.725814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.725825 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.725835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.725845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 
'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-19 22:02:53.725852 | orchestrator | 2025-05-19 22:02:53.725859 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-19 22:02:53.725865 | orchestrator | Monday 19 May 2025 22:02:02 +0000 (0:00:06.864) 0:05:47.135 ************ 2025-05-19 22:02:53.725872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.725882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.725889 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.725910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}}}})  2025-05-19 22:02:53.725920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.725927 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.725933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}}}})  2025-05-19 22:02:53.725944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-19 22:02:53.725951 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.725957 | orchestrator | 2025-05-19 22:02:53.725964 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-19 22:02:53.725970 | orchestrator | Monday 19 May 2025 22:02:03 +0000 (0:00:00.546) 0:05:47.682 ************ 2025-05-19 22:02:53.725976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 22:02:53.725983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 22:02:53.725992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 22:02:53.725999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 22:02:53.726005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 22:02:53.726033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 22:02:53.726041 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.726048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 22:02:53.726054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 22:02:53.726060 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.726070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 22:02:53.726077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-19 22:02:53.726088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 22:02:53.726095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-19 22:02:53.726101 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.726107 | orchestrator | 2025-05-19 22:02:53.726114 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-19 22:02:53.726120 | orchestrator | Monday 19 May 2025 22:02:04 +0000 (0:00:01.264) 0:05:48.946 ************ 2025-05-19 22:02:53.726127 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.726133 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.726139 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.726145 | orchestrator | 2025-05-19 22:02:53.726152 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-19 22:02:53.726158 | orchestrator | Monday 19 May 2025 22:02:06 +0000 (0:00:01.216) 0:05:50.163 ************ 2025-05-19 22:02:53.726164 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.726170 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.726176 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.726183 | orchestrator | 2025-05-19 22:02:53.726189 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-19 22:02:53.726195 | orchestrator | Monday 19 May 2025 22:02:07 +0000 (0:00:01.815) 0:05:51.978 ************ 2025-05-19 22:02:53.726204 | orchestrator | 
skipping: [testbed-node-0] 2025-05-19 22:02:53.726213 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.726236 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.726246 | orchestrator | 2025-05-19 22:02:53.726257 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-19 22:02:53.726266 | orchestrator | Monday 19 May 2025 22:02:08 +0000 (0:00:00.287) 0:05:52.265 ************ 2025-05-19 22:02:53.726275 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.726285 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.726294 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.726303 | orchestrator | 2025-05-19 22:02:53.726314 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-19 22:02:53.726323 | orchestrator | Monday 19 May 2025 22:02:08 +0000 (0:00:00.690) 0:05:52.956 ************ 2025-05-19 22:02:53.726333 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.726344 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.726351 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.726358 | orchestrator | 2025-05-19 22:02:53.726364 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-19 22:02:53.726370 | orchestrator | Monday 19 May 2025 22:02:09 +0000 (0:00:00.333) 0:05:53.289 ************ 2025-05-19 22:02:53.726376 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.726382 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.726388 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.726395 | orchestrator | 2025-05-19 22:02:53.726401 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-19 22:02:53.726407 | orchestrator | Monday 19 May 2025 22:02:09 +0000 (0:00:00.321) 0:05:53.610 ************ 2025-05-19 22:02:53.726413 | orchestrator | 
skipping: [testbed-node-0] 2025-05-19 22:02:53.726419 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.726430 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.726436 | orchestrator | 2025-05-19 22:02:53.726442 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-19 22:02:53.726449 | orchestrator | Monday 19 May 2025 22:02:09 +0000 (0:00:00.319) 0:05:53.930 ************ 2025-05-19 22:02:53.726455 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.726461 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.726474 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.726480 | orchestrator | 2025-05-19 22:02:53.726486 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-19 22:02:53.726492 | orchestrator | Monday 19 May 2025 22:02:10 +0000 (0:00:00.919) 0:05:54.850 ************ 2025-05-19 22:02:53.726498 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.726505 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.726511 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.726517 | orchestrator | 2025-05-19 22:02:53.726523 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-19 22:02:53.726529 | orchestrator | Monday 19 May 2025 22:02:11 +0000 (0:00:00.656) 0:05:55.506 ************ 2025-05-19 22:02:53.726535 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.726542 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.726548 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.726554 | orchestrator | 2025-05-19 22:02:53.726560 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-19 22:02:53.726566 | orchestrator | Monday 19 May 2025 22:02:11 +0000 (0:00:00.342) 0:05:55.849 ************ 2025-05-19 22:02:53.726572 | orchestrator | ok: [testbed-node-0] 2025-05-19 
22:02:53.726578 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.726584 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.726590 | orchestrator | 2025-05-19 22:02:53.726596 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-19 22:02:53.726603 | orchestrator | Monday 19 May 2025 22:02:12 +0000 (0:00:01.160) 0:05:57.009 ************ 2025-05-19 22:02:53.726609 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.726615 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.726625 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.726631 | orchestrator | 2025-05-19 22:02:53.726638 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-19 22:02:53.726644 | orchestrator | Monday 19 May 2025 22:02:13 +0000 (0:00:00.870) 0:05:57.880 ************ 2025-05-19 22:02:53.726650 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.726656 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.726662 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.726668 | orchestrator | 2025-05-19 22:02:53.726674 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-19 22:02:53.726681 | orchestrator | Monday 19 May 2025 22:02:14 +0000 (0:00:00.880) 0:05:58.761 ************ 2025-05-19 22:02:53.726687 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.726693 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.726699 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.726705 | orchestrator | 2025-05-19 22:02:53.726711 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-19 22:02:53.726717 | orchestrator | Monday 19 May 2025 22:02:19 +0000 (0:00:04.700) 0:06:03.461 ************ 2025-05-19 22:02:53.726723 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.726730 | orchestrator | ok: [testbed-node-2] 
2025-05-19 22:02:53.726736 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.726742 | orchestrator | 2025-05-19 22:02:53.726748 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-19 22:02:53.726754 | orchestrator | Monday 19 May 2025 22:02:22 +0000 (0:00:03.638) 0:06:07.100 ************ 2025-05-19 22:02:53.726760 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.726766 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.726772 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.726779 | orchestrator | 2025-05-19 22:02:53.726785 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-19 22:02:53.726791 | orchestrator | Monday 19 May 2025 22:02:31 +0000 (0:00:08.773) 0:06:15.873 ************ 2025-05-19 22:02:53.726797 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.726803 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.726809 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.726815 | orchestrator | 2025-05-19 22:02:53.726822 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-19 22:02:53.726833 | orchestrator | Monday 19 May 2025 22:02:36 +0000 (0:00:04.779) 0:06:20.653 ************ 2025-05-19 22:02:53.726839 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:02:53.726845 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:02:53.726852 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:02:53.726858 | orchestrator | 2025-05-19 22:02:53.726864 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-19 22:02:53.726870 | orchestrator | Monday 19 May 2025 22:02:46 +0000 (0:00:09.946) 0:06:30.599 ************ 2025-05-19 22:02:53.726876 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.726882 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.726888 | 
orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.726906 | orchestrator | 2025-05-19 22:02:53.726913 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-19 22:02:53.726919 | orchestrator | Monday 19 May 2025 22:02:46 +0000 (0:00:00.357) 0:06:30.956 ************ 2025-05-19 22:02:53.726925 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.726931 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.726937 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.726943 | orchestrator | 2025-05-19 22:02:53.726949 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-19 22:02:53.726955 | orchestrator | Monday 19 May 2025 22:02:47 +0000 (0:00:00.672) 0:06:31.629 ************ 2025-05-19 22:02:53.726962 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.726968 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.726974 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.726980 | orchestrator | 2025-05-19 22:02:53.726986 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-19 22:02:53.726992 | orchestrator | Monday 19 May 2025 22:02:47 +0000 (0:00:00.357) 0:06:31.986 ************ 2025-05-19 22:02:53.726998 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.727004 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.727011 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.727017 | orchestrator | 2025-05-19 22:02:53.727026 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-05-19 22:02:53.727032 | orchestrator | Monday 19 May 2025 22:02:48 +0000 (0:00:00.349) 0:06:32.335 ************ 2025-05-19 22:02:53.727038 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.727045 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.727051 | 
orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.727057 | orchestrator | 2025-05-19 22:02:53.727063 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-19 22:02:53.727069 | orchestrator | Monday 19 May 2025 22:02:48 +0000 (0:00:00.309) 0:06:32.645 ************ 2025-05-19 22:02:53.727075 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:02:53.727081 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:02:53.727088 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:02:53.727094 | orchestrator | 2025-05-19 22:02:53.727100 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-19 22:02:53.727106 | orchestrator | Monday 19 May 2025 22:02:49 +0000 (0:00:00.650) 0:06:33.295 ************ 2025-05-19 22:02:53.727112 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.727118 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.727125 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.727131 | orchestrator | 2025-05-19 22:02:53.727137 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-19 22:02:53.727143 | orchestrator | Monday 19 May 2025 22:02:50 +0000 (0:00:00.882) 0:06:34.178 ************ 2025-05-19 22:02:53.727149 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:02:53.727155 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:02:53.727162 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:02:53.727168 | orchestrator | 2025-05-19 22:02:53.727174 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:02:53.727180 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-19 22:02:53.727194 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-19 22:02:53.727200 | orchestrator | 
testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-19 22:02:53.727207 | orchestrator | 2025-05-19 22:02:53.727213 | orchestrator | 2025-05-19 22:02:53.727219 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:02:53.727225 | orchestrator | Monday 19 May 2025 22:02:50 +0000 (0:00:00.811) 0:06:34.990 ************ 2025-05-19 22:02:53.727231 | orchestrator | =============================================================================== 2025-05-19 22:02:53.727237 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.95s 2025-05-19 22:02:53.727244 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.77s 2025-05-19 22:02:53.727250 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 7.69s 2025-05-19 22:02:53.727256 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.86s 2025-05-19 22:02:53.727262 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.49s 2025-05-19 22:02:53.727268 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.02s 2025-05-19 22:02:53.727274 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.78s 2025-05-19 22:02:53.727280 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.70s 2025-05-19 22:02:53.727286 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.56s 2025-05-19 22:02:53.727292 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.55s 2025-05-19 22:02:53.727299 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.48s 2025-05-19 22:02:53.727305 | orchestrator | haproxy-config : Copying over horizon haproxy 
config -------------------- 4.42s 2025-05-19 22:02:53.727311 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.35s 2025-05-19 22:02:53.727317 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.33s 2025-05-19 22:02:53.727323 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.25s 2025-05-19 22:02:53.727329 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.21s 2025-05-19 22:02:53.727336 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.20s 2025-05-19 22:02:53.727342 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.16s 2025-05-19 22:02:53.727348 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.94s 2025-05-19 22:02:53.727354 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.93s 2025-05-19 22:02:53.727360 | orchestrator | 2025-05-19 22:02:53 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED 2025-05-19 22:02:53.727366 | orchestrator | 2025-05-19 22:02:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:56.769569 | orchestrator | 2025-05-19 22:02:56 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:02:56.771361 | orchestrator | 2025-05-19 22:02:56 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:02:56.775212 | orchestrator | 2025-05-19 22:02:56 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED 2025-05-19 22:02:56.775248 | orchestrator | 2025-05-19 22:02:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:02:59.823871 | orchestrator | 2025-05-19 22:02:59 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:02:59.824394 | orchestrator | 2025-05-19 22:02:59 | 
INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED
2025-05-19 22:04:34 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:04:34.449169 | orchestrator | 2025-05-19 22:04:34 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:04:34.451272 | orchestrator | 2025-05-19 22:04:34 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED 2025-05-19 22:04:34.451327 | orchestrator | 2025-05-19 22:04:34 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:04:37.503056 | orchestrator | 2025-05-19 22:04:37 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:04:37.505011 | orchestrator | 2025-05-19 22:04:37 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:04:37.507729 | orchestrator | 2025-05-19 22:04:37 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED 2025-05-19 22:04:37.507766 | orchestrator | 2025-05-19 22:04:37 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:04:40.557437 | orchestrator | 2025-05-19 22:04:40 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:04:40.558849 | orchestrator | 2025-05-19 22:04:40 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:04:40.559390 | orchestrator | 2025-05-19 22:04:40 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED 2025-05-19 22:04:40.560174 | orchestrator | 2025-05-19 22:04:40 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:04:43.601153 | orchestrator | 2025-05-19 22:04:43 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:04:43.602899 | orchestrator | 2025-05-19 22:04:43 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state STARTED 2025-05-19 22:04:43.604776 | orchestrator | 2025-05-19 22:04:43 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED 2025-05-19 22:04:43.604840 | orchestrator | 2025-05-19 22:04:43 | INFO  | 
Wait 1 second(s) until the next check 2025-05-19 22:04:46.664264 | orchestrator | 2025-05-19 22:04:46 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:04:46.669801 | orchestrator | 2025-05-19 22:04:46 | INFO  | Task ea0e1f68-7f1d-4238-8899-1a88afe975cb is in state SUCCESS 2025-05-19 22:04:46.675325 | orchestrator | 2025-05-19 22:04:46.675455 | orchestrator | 2025-05-19 22:04:46.675472 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-19 22:04:46.675486 | orchestrator | 2025-05-19 22:04:46.675498 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-05-19 22:04:46.675566 | orchestrator | Monday 19 May 2025 21:53:38 +0000 (0:00:00.831) 0:00:00.831 ************ 2025-05-19 22:04:46.675580 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.675682 | orchestrator | 2025-05-19 22:04:46.675696 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-05-19 22:04:46.675707 | orchestrator | Monday 19 May 2025 21:53:39 +0000 (0:00:01.198) 0:00:02.029 ************ 2025-05-19 22:04:46.675718 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.675730 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.675741 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.675752 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.675763 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.675774 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.675785 | orchestrator | 2025-05-19 22:04:46.675796 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-05-19 22:04:46.675807 | orchestrator | Monday 19 May 2025 21:53:40 +0000 (0:00:01.470) 0:00:03.500 ************ 2025-05-19 22:04:46.675818 | 
orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.675829 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.675839 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.675850 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.675861 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.675871 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.675882 | orchestrator | 2025-05-19 22:04:46.675895 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-05-19 22:04:46.675907 | orchestrator | Monday 19 May 2025 21:53:41 +0000 (0:00:01.067) 0:00:04.568 ************ 2025-05-19 22:04:46.675920 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.675932 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.675944 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.675957 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.675969 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.675981 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.675994 | orchestrator | 2025-05-19 22:04:46.676006 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-05-19 22:04:46.676019 | orchestrator | Monday 19 May 2025 21:53:42 +0000 (0:00:00.921) 0:00:05.489 ************ 2025-05-19 22:04:46.676031 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.676042 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.676054 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.676066 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.676078 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.676132 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.676173 | orchestrator | 2025-05-19 22:04:46.676187 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-05-19 22:04:46.676237 | orchestrator | Monday 19 May 2025 21:53:43 +0000 (0:00:00.659) 0:00:06.149 ************ 
2025-05-19 22:04:46.676250 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.676312 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.676324 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.676335 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.676346 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.676357 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.676367 | orchestrator | 2025-05-19 22:04:46.676378 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-05-19 22:04:46.676389 | orchestrator | Monday 19 May 2025 21:53:44 +0000 (0:00:00.519) 0:00:06.668 ************ 2025-05-19 22:04:46.676400 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.676418 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.676435 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.676453 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.676474 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.676694 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.676712 | orchestrator | 2025-05-19 22:04:46.676730 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-05-19 22:04:46.676755 | orchestrator | Monday 19 May 2025 21:53:44 +0000 (0:00:00.732) 0:00:07.401 ************ 2025-05-19 22:04:46.676767 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.676779 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.676790 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.676801 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.676812 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.676822 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.676833 | orchestrator | 2025-05-19 22:04:46.676844 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-05-19 22:04:46.676855 | orchestrator | 
Monday 19 May 2025 21:53:45 +0000 (0:00:01.015) 0:00:08.416 ************ 2025-05-19 22:04:46.676866 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.676876 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.676887 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.676898 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.676938 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.677042 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.677054 | orchestrator | 2025-05-19 22:04:46.677065 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-19 22:04:46.677122 | orchestrator | Monday 19 May 2025 21:53:46 +0000 (0:00:01.074) 0:00:09.490 ************ 2025-05-19 22:04:46.677174 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 22:04:46.677189 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 22:04:46.677201 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 22:04:46.677211 | orchestrator | 2025-05-19 22:04:46.677259 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-05-19 22:04:46.677272 | orchestrator | Monday 19 May 2025 21:53:47 +0000 (0:00:00.721) 0:00:10.211 ************ 2025-05-19 22:04:46.677283 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.677293 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.677304 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.677315 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.677325 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.677336 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.677346 | orchestrator | 2025-05-19 22:04:46.677502 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-05-19 22:04:46.677516 | orchestrator | Monday 19 May 2025 
21:53:48 +0000 (0:00:00.988) 0:00:11.200 ************ 2025-05-19 22:04:46.677574 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 22:04:46.677596 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 22:04:46.677608 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 22:04:46.677619 | orchestrator | 2025-05-19 22:04:46.677630 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-05-19 22:04:46.677641 | orchestrator | Monday 19 May 2025 21:53:51 +0000 (0:00:02.922) 0:00:14.122 ************ 2025-05-19 22:04:46.677652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 22:04:46.677663 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 22:04:46.677673 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 22:04:46.677684 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.677695 | orchestrator | 2025-05-19 22:04:46.677706 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-05-19 22:04:46.677717 | orchestrator | Monday 19 May 2025 21:53:52 +0000 (0:00:00.610) 0:00:14.733 ************ 2025-05-19 22:04:46.677730 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.677745 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.677767 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.677778 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.677789 | orchestrator | 2025-05-19 22:04:46.677800 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-05-19 22:04:46.677811 | orchestrator | Monday 19 May 2025 21:53:53 +0000 (0:00:01.233) 0:00:15.966 ************ 2025-05-19 22:04:46.677824 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.677838 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.677850 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.677861 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.677872 | orchestrator | 
2025-05-19 22:04:46.677883 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-05-19 22:04:46.677894 | orchestrator | Monday 19 May 2025 21:53:53 +0000 (0:00:00.149) 0:00:16.115 ************ 2025-05-19 22:04:46.677907 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-19 21:53:49.182755', 'end': '2025-05-19 21:53:49.465984', 'delta': '0:00:00.283229', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.677931 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-19 21:53:50.191712', 'end': '2025-05-19 21:53:50.456019', 'delta': '0:00:00.264307', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.677944 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 
'start': '2025-05-19 21:53:50.923124', 'end': '2025-05-19 21:53:51.230078', 'delta': '0:00:00.306954', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.677962 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.677973 | orchestrator | 2025-05-19 22:04:46.677984 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-05-19 22:04:46.677995 | orchestrator | Monday 19 May 2025 21:53:53 +0000 (0:00:00.210) 0:00:16.326 ************ 2025-05-19 22:04:46.678006 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.678090 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.678102 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.678113 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.678248 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.678269 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.678287 | orchestrator | 2025-05-19 22:04:46.678340 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-05-19 22:04:46.678352 | orchestrator | Monday 19 May 2025 21:53:55 +0000 (0:00:01.824) 0:00:18.150 ************ 2025-05-19 22:04:46.678363 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.678374 | orchestrator | 2025-05-19 22:04:46.678385 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-05-19 22:04:46.678396 | orchestrator | Monday 19 May 2025 21:53:56 +0000 (0:00:00.679) 0:00:18.830 ************ 2025-05-19 22:04:46.678407 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 22:04:46.678419 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.678438 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.678494 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.678508 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.678519 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.678557 | orchestrator | 2025-05-19 22:04:46.678576 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-05-19 22:04:46.678595 | orchestrator | Monday 19 May 2025 21:53:57 +0000 (0:00:01.336) 0:00:20.166 ************ 2025-05-19 22:04:46.678615 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.678632 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.678646 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.678657 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.678668 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.678679 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.678690 | orchestrator | 2025-05-19 22:04:46.678700 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-19 22:04:46.678712 | orchestrator | Monday 19 May 2025 21:53:58 +0000 (0:00:01.423) 0:00:21.590 ************ 2025-05-19 22:04:46.678722 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.678733 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.678744 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.678755 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.678766 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.678777 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.678787 | orchestrator | 2025-05-19 22:04:46.678798 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-05-19 22:04:46.678809 | 
orchestrator | Monday 19 May 2025 21:54:00 +0000 (0:00:01.149) 0:00:22.740 ************ 2025-05-19 22:04:46.678820 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.678831 | orchestrator | 2025-05-19 22:04:46.678841 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-05-19 22:04:46.678864 | orchestrator | Monday 19 May 2025 21:54:00 +0000 (0:00:00.164) 0:00:22.905 ************ 2025-05-19 22:04:46.678875 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.678886 | orchestrator | 2025-05-19 22:04:46.678897 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-19 22:04:46.678908 | orchestrator | Monday 19 May 2025 21:54:00 +0000 (0:00:00.175) 0:00:23.080 ************ 2025-05-19 22:04:46.678919 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.678930 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.678941 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.678951 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.678962 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.678973 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.678984 | orchestrator | 2025-05-19 22:04:46.678995 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-05-19 22:04:46.679019 | orchestrator | Monday 19 May 2025 21:54:01 +0000 (0:00:00.692) 0:00:23.773 ************ 2025-05-19 22:04:46.679030 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.679041 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.679052 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.679068 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.679080 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.679091 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.679101 | orchestrator | 2025-05-19 
22:04:46.679112 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-05-19 22:04:46.679124 | orchestrator | Monday 19 May 2025 21:54:02 +0000 (0:00:01.315) 0:00:25.089 ************ 2025-05-19 22:04:46.679134 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.679145 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.679156 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.679167 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.679178 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.679189 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.679199 | orchestrator | 2025-05-19 22:04:46.679210 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-05-19 22:04:46.679221 | orchestrator | Monday 19 May 2025 21:54:03 +0000 (0:00:01.073) 0:00:26.163 ************ 2025-05-19 22:04:46.679232 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.679243 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.679254 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.679265 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.679276 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.679287 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.679297 | orchestrator | 2025-05-19 22:04:46.679308 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-05-19 22:04:46.679320 | orchestrator | Monday 19 May 2025 21:54:04 +0000 (0:00:01.126) 0:00:27.289 ************ 2025-05-19 22:04:46.679331 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.679342 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.679353 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.679364 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.679374 | orchestrator | skipping: 
[testbed-node-4] 2025-05-19 22:04:46.679385 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.679396 | orchestrator | 2025-05-19 22:04:46.679407 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-05-19 22:04:46.679419 | orchestrator | Monday 19 May 2025 21:54:05 +0000 (0:00:00.709) 0:00:27.998 ************ 2025-05-19 22:04:46.679430 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.679441 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.679451 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.679462 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.679473 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.679484 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.679503 | orchestrator | 2025-05-19 22:04:46.679514 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-19 22:04:46.679525 | orchestrator | Monday 19 May 2025 21:54:06 +0000 (0:00:00.983) 0:00:28.981 ************ 2025-05-19 22:04:46.679560 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.679571 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.679582 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.679593 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.679604 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.679614 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.679625 | orchestrator | 2025-05-19 22:04:46.679636 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-05-19 22:04:46.679647 | orchestrator | Monday 19 May 2025 21:54:07 +0000 (0:00:00.827) 0:00:29.808 ************ 2025-05-19 22:04:46.679659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630', 'scsi-SQEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part1', 'scsi-SQEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part14', 'scsi-SQEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part15', 'scsi-SQEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part16', 'scsi-SQEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.679819 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.679833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679874 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.679946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61', 'scsi-SQEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part1', 'scsi-SQEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part14', 'scsi-SQEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part15', 'scsi-SQEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part16', 'scsi-SQEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.679969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.679981 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.679992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680015 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680037 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.680060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9de35ea6-b803-49bc-8b65-554f85c20f06', 'scsi-SQEMU_QEMU_HARDDISK_9de35ea6-b803-49bc-8b65-554f85c20f06'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9de35ea6-b803-49bc-8b65-554f85c20f06-part1', 'scsi-SQEMU_QEMU_HARDDISK_9de35ea6-b803-49bc-8b65-554f85c20f06-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9de35ea6-b803-49bc-8b65-554f85c20f06-part14', 'scsi-SQEMU_QEMU_HARDDISK_9de35ea6-b803-49bc-8b65-554f85c20f06-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9de35ea6-b803-49bc-8b65-554f85c20f06-part15', 'scsi-SQEMU_QEMU_HARDDISK_9de35ea6-b803-49bc-8b65-554f85c20f06-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9de35ea6-b803-49bc-8b65-554f85c20f06-part16', 'scsi-SQEMU_QEMU_HARDDISK_9de35ea6-b803-49bc-8b65-554f85c20f06-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680150 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.680162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2161015--9b2d--55ef--85cd--b20f941db83a-osd--block--d2161015--9b2d--55ef--85cd--b20f941db83a', 'dm-uuid-LVM-CW4c3NGDdo1fwdkbiKJIdjjJJdnMVj1UxTnxsVSsTxcZWGST2UJuuMus20xQFxB6'], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52cfe21f--2cf0--5660--8f5b--0412bede7d5f-osd--block--52cfe21f--2cf0--5660--8f5b--0412bede7d5f', 'dm-uuid-LVM-25Ux91xuT7WiMrBFdwOi1pMwenBIWeCBeiRM36oZY1JX4ZJkb0b2c1NOPE20V9v0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73ec3cc1--218e--51bb--a362--2e871742ea52-osd--block--73ec3cc1--218e--51bb--a362--2e871742ea52', 'dm-uuid-LVM-yGlbKPYLW6DemIsqRYBfWpD8tvVVslaOYqa3UfTOaNStRqSocsB08xBr6Ha7N511'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9-osd--block--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9', 'dm-uuid-LVM-8aMxDAC69wHx71dcpG20Q31tCBflhRmBrlxbrompEEIQX7YSUfTqlUZ2yqTkpcnk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680329 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680469 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--52cfe21f--2cf0--5660--8f5b--0412bede7d5f-osd--block--52cfe21f--2cf0--5660--8f5b--0412bede7d5f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-l7ssem-IdnE-BWTE-0Yd7-3cX8-jALR-GFmCDt', 'scsi-0QEMU_QEMU_HARDDISK_65b1a457-74f9-440b-9c0b-913fdfb04314', 'scsi-SQEMU_QEMU_HARDDISK_65b1a457-74f9-440b-9c0b-913fdfb04314'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part1', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part14', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part15', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part16', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680590 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d2161015--9b2d--55ef--85cd--b20f941db83a-osd--block--d2161015--9b2d--55ef--85cd--b20f941db83a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JdbZd6-MD3W-nwco-pvWy-uPaG-COz4-ILzeqO', 'scsi-0QEMU_QEMU_HARDDISK_53ed34a9-290d-4031-aa3e-f95b5c6d33b8', 'scsi-SQEMU_QEMU_HARDDISK_53ed34a9-290d-4031-aa3e-f95b5c6d33b8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9-osd--block--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hcS6Z0-3dbh-FsUe-6Hl6-0vR4-DzK8-1zTlE6', 'scsi-0QEMU_QEMU_HARDDISK_cd626c85-4d79-4ec3-873e-c38f80c6408d', 'scsi-SQEMU_QEMU_HARDDISK_cd626c85-4d79-4ec3-873e-c38f80c6408d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--73ec3cc1--218e--51bb--a362--2e871742ea52-osd--block--73ec3cc1--218e--51bb--a362--2e871742ea52'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-snlF3m-Oi4j-I3Sj-YZaO-WcHw-xu2a-V6LWrW', 'scsi-0QEMU_QEMU_HARDDISK_934db128-59d0-4992-8eb9-92fedfad2305', 'scsi-SQEMU_QEMU_HARDDISK_934db128-59d0-4992-8eb9-92fedfad2305'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5aea9423-7155-4edc-a2c1-cc12eb50d261', 'scsi-SQEMU_QEMU_HARDDISK_5aea9423-7155-4edc-a2c1-cc12eb50d261'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3', 'scsi-SQEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d6c00661--cf2a--5067--a507--d2ca4df6447b-osd--block--d6c00661--cf2a--5067--a507--d2ca4df6447b', 'dm-uuid-LVM-OqlL2uEqafAGX9iIr2ntluuztK7fkD1t3vrp0n4U7NcdnIkhJg8R2DeijZ9Lmols'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680702 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.680714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE 
[Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680725 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8-osd--block--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8', 'dm-uuid-LVM-LoMld4gp88uLIixd8sShMrFgxLTqn5lNXnvbLdxHRCJfDqyPk00c82b0G8aSRO5u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680767 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.680778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:04:46.680866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part1', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part14', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part15', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part16', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d6c00661--cf2a--5067--a507--d2ca4df6447b-osd--block--d6c00661--cf2a--5067--a507--d2ca4df6447b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jGuMqO-44Uk-XNOS-pHB5-fCAH-wQzA-HW6kvE', 'scsi-0QEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70', 'scsi-SQEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8-osd--block--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dsUu3E-N4Ci-8cAW-iChd-BvQ3-heId-OUpQXI', 'scsi-0QEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6', 'scsi-SQEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba', 'scsi-SQEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:04:46.680937 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.680948 | orchestrator | 2025-05-19 22:04:46.680958 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-05-19 22:04:46.680973 | orchestrator | Monday 19 May 2025 21:54:08 +0000 (0:00:01.263) 0:00:31.072 ************ 2025-05-19 22:04:46.680983 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.680994 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681004 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681014 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681024 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681040 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681060 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681071 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681081 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681091 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 
22:04:46.681102 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681121 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681137 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681151 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681162 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681173 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630', 'scsi-SQEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part1', 'scsi-SQEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part14', 'scsi-SQEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part15', 'scsi-SQEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part16', 'scsi-SQEMU_QEMU_HARDDISK_67cb46d1-5531-46d1-bade-650e39df9630-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-05-19 22:04:46.681199 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681215 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681243 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61', 'scsi-SQEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part1', 'scsi-SQEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part14', 'scsi-SQEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part15', 'scsi-SQEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part16', 'scsi-SQEMU_QEMU_HARDDISK_5bf1eb9f-1044-4b2d-b10a-96a4760b0d61-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-05-19 22:04:46.681261 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681281 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681292 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.681302 | orchestrator | skipping: [testbed-node-2] => (items loop2–loop7, sda, sr0; false_condition: inventory_hostname in groups.get(osd_group_name, []); full per-item device fact dicts omitted)  2025-05-19 22:04:46.681624 | orchestrator | skipping: [testbed-node-0]  2025-05-19 22:04:46.681658 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0–loop7, sda, sdb, sdc, sdd, sr0; false_condition: osd_auto_discovery | default(False) | bool; full per-item device fact dicts omitted)  2025-05-19 22:04:46.681726 | orchestrator | skipping: [testbed-node-1]  2025-05-19 22:04:46.681828 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0–loop7, sda, sdb, sdc; false_condition: osd_auto_discovery | default(False) | bool; full per-item device fact dicts omitted)  2025-05-19 22:04:46.681855 | orchestrator | skipping: [testbed-node-2]  2025-05-19 22:04:46.681910 | orchestrator | skipping: [testbed-node-3]  2025-05-19 22:04:46.681955 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0–loop7, sda; false_condition: osd_auto_discovery | default(False) | bool; full per-item device fact dicts omitted)
2025-05-19 22:04:46.682275 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d6c00661--cf2a--5067--a507--d2ca4df6447b-osd--block--d6c00661--cf2a--5067--a507--d2ca4df6447b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jGuMqO-44Uk-XNOS-pHB5-fCAH-wQzA-HW6kvE', 'scsi-0QEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70', 'scsi-SQEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.682299 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8-osd--block--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dsUu3E-N4Ci-8cAW-iChd-BvQ3-heId-OUpQXI', 'scsi-0QEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6', 'scsi-SQEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.682312 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3', 'scsi-SQEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.682330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba', 'scsi-SQEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.682342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.682353 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.682365 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:04:46.682376 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.682387 | orchestrator | 2025-05-19 22:04:46.682399 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-05-19 22:04:46.682409 | orchestrator | Monday 19 May 2025 21:54:10 +0000 (0:00:01.979) 0:00:33.053 ************ 2025-05-19 22:04:46.682419 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.682429 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.682439 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.682453 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.682463 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.682473 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.682482 | orchestrator | 2025-05-19 22:04:46.682492 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-05-19 22:04:46.682506 | orchestrator | Monday 19 May 2025 21:54:11 +0000 (0:00:01.159) 0:00:34.212 ************ 2025-05-19 22:04:46.682516 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.682525 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.682561 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.682586 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.682603 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.682618 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.682632 | orchestrator | 2025-05-19 22:04:46.682642 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-19 22:04:46.682652 | orchestrator | Monday 19 May 2025 21:54:12 +0000 (0:00:00.545) 0:00:34.758 ************ 2025-05-19 22:04:46.682661 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.682671 | 
orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.682681 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.682691 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.682701 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.682710 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.682720 | orchestrator | 2025-05-19 22:04:46.682730 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-19 22:04:46.682740 | orchestrator | Monday 19 May 2025 21:54:13 +0000 (0:00:01.079) 0:00:35.837 ************ 2025-05-19 22:04:46.682749 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.682759 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.682769 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.682778 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.682788 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.682798 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.682807 | orchestrator | 2025-05-19 22:04:46.682817 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-19 22:04:46.682827 | orchestrator | Monday 19 May 2025 21:54:13 +0000 (0:00:00.598) 0:00:36.435 ************ 2025-05-19 22:04:46.682836 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.682846 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.682856 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.682865 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.682875 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.682884 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.682894 | orchestrator | 2025-05-19 22:04:46.682904 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-19 22:04:46.682913 | orchestrator | Monday 19 May 2025 21:54:14 +0000 
(0:00:00.823) 0:00:37.259 ************ 2025-05-19 22:04:46.682923 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.682933 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.682942 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.682952 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.682961 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.682971 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.682981 | orchestrator | 2025-05-19 22:04:46.682990 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-05-19 22:04:46.683000 | orchestrator | Monday 19 May 2025 21:54:15 +0000 (0:00:01.059) 0:00:38.318 ************ 2025-05-19 22:04:46.683010 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 22:04:46.683020 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-19 22:04:46.683029 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-05-19 22:04:46.683039 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-05-19 22:04:46.683048 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-19 22:04:46.683058 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-05-19 22:04:46.683067 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-05-19 22:04:46.683077 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-19 22:04:46.683086 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-05-19 22:04:46.683096 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-19 22:04:46.683105 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-19 22:04:46.683115 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-19 22:04:46.683125 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-05-19 22:04:46.683141 | orchestrator | ok: [testbed-node-5] => 
(item=testbed-node-0) 2025-05-19 22:04:46.683151 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-19 22:04:46.683160 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-19 22:04:46.683170 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-19 22:04:46.683180 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-19 22:04:46.683189 | orchestrator | 2025-05-19 22:04:46.683199 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-05-19 22:04:46.683208 | orchestrator | Monday 19 May 2025 21:54:18 +0000 (0:00:03.215) 0:00:41.534 ************ 2025-05-19 22:04:46.683218 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 22:04:46.683228 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 22:04:46.683237 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 22:04:46.683247 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.683256 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-19 22:04:46.683266 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-19 22:04:46.683275 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-19 22:04:46.683285 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.683295 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-19 22:04:46.683305 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-19 22:04:46.683314 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-19 22:04:46.683324 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.683340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 22:04:46.683350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 22:04:46.683360 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 22:04:46.683369 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.683388 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 22:04:46.683397 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 22:04:46.683407 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 22:04:46.683417 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.683426 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 22:04:46.683436 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 22:04:46.683446 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 22:04:46.683455 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.683465 | orchestrator | 2025-05-19 22:04:46.683475 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-05-19 22:04:46.683485 | orchestrator | Monday 19 May 2025 21:54:19 +0000 (0:00:00.544) 0:00:42.079 ************ 2025-05-19 22:04:46.683495 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.683504 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.683514 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.683524 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.683594 | orchestrator | 2025-05-19 22:04:46.683605 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 22:04:46.683615 | orchestrator | Monday 19 May 2025 21:54:20 +0000 (0:00:01.015) 0:00:43.095 ************ 2025-05-19 22:04:46.683625 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.683634 | orchestrator | skipping: [testbed-node-4] 
2025-05-19 22:04:46.683644 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.683653 | orchestrator | 2025-05-19 22:04:46.683663 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 22:04:46.683673 | orchestrator | Monday 19 May 2025 21:54:20 +0000 (0:00:00.476) 0:00:43.571 ************ 2025-05-19 22:04:46.683690 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.683699 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.683709 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.683718 | orchestrator | 2025-05-19 22:04:46.683728 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 22:04:46.683738 | orchestrator | Monday 19 May 2025 21:54:21 +0000 (0:00:00.489) 0:00:44.060 ************ 2025-05-19 22:04:46.683747 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.683757 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.683766 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.683776 | orchestrator | 2025-05-19 22:04:46.683786 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 22:04:46.683795 | orchestrator | Monday 19 May 2025 21:54:21 +0000 (0:00:00.290) 0:00:44.351 ************ 2025-05-19 22:04:46.683805 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.683815 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.683824 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.683834 | orchestrator | 2025-05-19 22:04:46.683844 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-19 22:04:46.683854 | orchestrator | Monday 19 May 2025 21:54:22 +0000 (0:00:00.622) 0:00:44.974 ************ 2025-05-19 22:04:46.683863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 22:04:46.683873 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-4)  2025-05-19 22:04:46.683883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 22:04:46.683892 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.683902 | orchestrator | 2025-05-19 22:04:46.683911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 22:04:46.683921 | orchestrator | Monday 19 May 2025 21:54:22 +0000 (0:00:00.583) 0:00:45.558 ************ 2025-05-19 22:04:46.683931 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 22:04:46.683940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 22:04:46.683950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 22:04:46.683960 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.683970 | orchestrator | 2025-05-19 22:04:46.683979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 22:04:46.683989 | orchestrator | Monday 19 May 2025 21:54:23 +0000 (0:00:00.388) 0:00:45.946 ************ 2025-05-19 22:04:46.683999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 22:04:46.684009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 22:04:46.684018 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 22:04:46.684027 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.684037 | orchestrator | 2025-05-19 22:04:46.684047 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-19 22:04:46.684057 | orchestrator | Monday 19 May 2025 21:54:24 +0000 (0:00:00.969) 0:00:46.916 ************ 2025-05-19 22:04:46.684066 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.684076 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.684086 | orchestrator | ok: [testbed-node-5] 2025-05-19 
22:04:46.684095 | orchestrator | 2025-05-19 22:04:46.684105 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-19 22:04:46.684114 | orchestrator | Monday 19 May 2025 21:54:25 +0000 (0:00:00.687) 0:00:47.603 ************ 2025-05-19 22:04:46.684121 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-19 22:04:46.684129 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-19 22:04:46.684137 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-19 22:04:46.684145 | orchestrator | 2025-05-19 22:04:46.684153 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-05-19 22:04:46.684161 | orchestrator | Monday 19 May 2025 21:54:25 +0000 (0:00:00.449) 0:00:48.053 ************ 2025-05-19 22:04:46.684180 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 22:04:46.684189 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 22:04:46.684203 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 22:04:46.684211 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-19 22:04:46.684219 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 22:04:46.684226 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 22:04:46.684234 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-19 22:04:46.684242 | orchestrator | 2025-05-19 22:04:46.684250 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-05-19 22:04:46.684258 | orchestrator | Monday 19 May 2025 21:54:26 +0000 (0:00:00.608) 0:00:48.661 ************ 2025-05-19 22:04:46.684266 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 
2025-05-19 22:04:46.684274 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 22:04:46.684281 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 22:04:46.684289 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-19 22:04:46.684297 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 22:04:46.684305 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 22:04:46.684313 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-19 22:04:46.684321 | orchestrator | 2025-05-19 22:04:46.684329 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-19 22:04:46.684337 | orchestrator | Monday 19 May 2025 21:54:28 +0000 (0:00:02.379) 0:00:51.041 ************ 2025-05-19 22:04:46.684345 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.684353 | orchestrator | 2025-05-19 22:04:46.684361 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-19 22:04:46.684369 | orchestrator | Monday 19 May 2025 21:54:29 +0000 (0:00:01.149) 0:00:52.190 ************ 2025-05-19 22:04:46.684378 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.684386 | orchestrator | 2025-05-19 22:04:46.684394 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-19 22:04:46.684402 | orchestrator | Monday 19 May 2025 21:54:30 +0000 
(0:00:01.276) 0:00:53.467 ************ 2025-05-19 22:04:46.684410 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.684418 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.684426 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.684434 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.684442 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.684449 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.684458 | orchestrator | 2025-05-19 22:04:46.684465 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-19 22:04:46.684473 | orchestrator | Monday 19 May 2025 21:54:31 +0000 (0:00:00.788) 0:00:54.256 ************ 2025-05-19 22:04:46.684481 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.684489 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.684497 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.684505 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.684513 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.684521 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.684546 | orchestrator | 2025-05-19 22:04:46.684561 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-19 22:04:46.684570 | orchestrator | Monday 19 May 2025 21:54:33 +0000 (0:00:01.369) 0:00:55.625 ************ 2025-05-19 22:04:46.684578 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.684586 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.684593 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.684601 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.684609 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.684616 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.684624 | orchestrator | 2025-05-19 22:04:46.684632 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2025-05-19 22:04:46.684640 | orchestrator | Monday 19 May 2025 21:54:33 +0000 (0:00:00.964) 0:00:56.590 ************ 2025-05-19 22:04:46.684647 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.684655 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.684663 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.684671 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.684679 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.684687 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.684694 | orchestrator | 2025-05-19 22:04:46.684702 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-19 22:04:46.684710 | orchestrator | Monday 19 May 2025 21:54:35 +0000 (0:00:01.220) 0:00:57.811 ************ 2025-05-19 22:04:46.684718 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.684726 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.684733 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.684741 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.684749 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.684757 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.684765 | orchestrator | 2025-05-19 22:04:46.684773 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-19 22:04:46.684781 | orchestrator | Monday 19 May 2025 21:54:36 +0000 (0:00:00.933) 0:00:58.744 ************ 2025-05-19 22:04:46.684793 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.684801 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.684809 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.684817 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.684825 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.684836 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.684844 | 
orchestrator |
2025-05-19 22:04:46.684852 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-19 22:04:46.684860 | orchestrator | Monday 19 May 2025 21:54:36 +0000 (0:00:00.612) 0:00:59.357 ************
2025-05-19 22:04:46.684868 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.684875 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.684883 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.684891 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.684899 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.684906 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.684914 | orchestrator |
2025-05-19 22:04:46.684922 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-19 22:04:46.684929 | orchestrator | Monday 19 May 2025 21:54:37 +0000 (0:00:00.743) 0:01:00.101 ************
2025-05-19 22:04:46.684937 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.684945 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.684953 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.684960 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.684968 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.684976 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.684984 | orchestrator |
2025-05-19 22:04:46.684992 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-19 22:04:46.684999 | orchestrator | Monday 19 May 2025 21:54:38 +0000 (0:00:01.131) 0:01:01.233 ************
2025-05-19 22:04:46.685007 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.685020 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.685028 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.685036 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.685043 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.685051 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.685059 | orchestrator |
2025-05-19 22:04:46.685067 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-19 22:04:46.685075 | orchestrator | Monday 19 May 2025 21:54:40 +0000 (0:00:01.585) 0:01:02.819 ************
2025-05-19 22:04:46.685083 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.685091 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.685099 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.685106 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.685114 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.685122 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.685130 | orchestrator |
2025-05-19 22:04:46.685137 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-05-19 22:04:46.685145 | orchestrator | Monday 19 May 2025 21:54:41 +0000 (0:00:01.019) 0:01:03.838 ************
2025-05-19 22:04:46.685153 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.685161 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.685169 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.685177 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.685185 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.685192 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.685200 | orchestrator |
2025-05-19 22:04:46.685208 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-05-19 22:04:46.685216 | orchestrator | Monday 19 May 2025 21:54:42 +0000 (0:00:01.241) 0:01:05.080 ************
2025-05-19 22:04:46.685224 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.685231 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.685239 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.685247 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.685255 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.685263 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.685270 | orchestrator |
2025-05-19 22:04:46.685278 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-05-19 22:04:46.685286 | orchestrator | Monday 19 May 2025 21:54:43 +0000 (0:00:00.617) 0:01:05.698 ************
2025-05-19 22:04:46.685294 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.685302 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.685310 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.685317 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.685325 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.685333 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.685341 | orchestrator |
2025-05-19 22:04:46.685349 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-05-19 22:04:46.685357 | orchestrator | Monday 19 May 2025 21:54:44 +0000 (0:00:01.090) 0:01:06.789 ************
2025-05-19 22:04:46.685365 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.685372 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.685380 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.685388 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.685396 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.685403 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.685411 | orchestrator |
2025-05-19 22:04:46.685419 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-05-19 22:04:46.685427 | orchestrator | Monday 19 May 2025 21:54:44 +0000 (0:00:00.601) 0:01:07.390 ************
2025-05-19 22:04:46.685435 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.685443 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.685450 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.685458 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.685466 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.685474 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.685486 | orchestrator |
2025-05-19 22:04:46.685494 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-05-19 22:04:46.685502 | orchestrator | Monday 19 May 2025 21:54:45 +0000 (0:00:00.942) 0:01:08.333 ************
2025-05-19 22:04:46.685510 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.685518 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.685526 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.685552 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.685560 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.685567 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.685575 | orchestrator |
2025-05-19 22:04:46.685583 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-05-19 22:04:46.685595 | orchestrator | Monday 19 May 2025 21:54:46 +0000 (0:00:00.552) 0:01:08.885 ************
2025-05-19 22:04:46.685603 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.685611 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.685619 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.685627 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.685639 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.685646 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.685654 | orchestrator |
2025-05-19 22:04:46.685662 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-05-19 22:04:46.685670 | orchestrator | Monday 19 May 2025 21:54:47 +0000 (0:00:00.748) 0:01:09.633 ************
2025-05-19 22:04:46.685678 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.685686 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.685694 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.685701 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.685709 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.685717 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.685725 | orchestrator |
2025-05-19 22:04:46.685733 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-19 22:04:46.685740 | orchestrator | Monday 19 May 2025 21:54:47 +0000 (0:00:00.588) 0:01:10.222 ************
2025-05-19 22:04:46.685748 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.685756 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.685764 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.685772 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.685779 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.685787 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.685795 | orchestrator |
2025-05-19 22:04:46.685803 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-05-19 22:04:46.685811 | orchestrator | Monday 19 May 2025 21:54:48 +0000 (0:00:00.962) 0:01:11.185 ************
2025-05-19 22:04:46.685818 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.685826 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.685834 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.685842 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.685849 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.685857 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.685865 | orchestrator |
2025-05-19 22:04:46.685873 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-05-19 22:04:46.685881 | orchestrator | Monday 19 May 2025 21:54:50 +0000 (0:00:01.423) 0:01:12.608 ************
2025-05-19 22:04:46.685889 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.685896 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.685904 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.685912 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.685920 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.685927 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.685935 | orchestrator |
2025-05-19 22:04:46.685943 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-05-19 22:04:46.685951 | orchestrator | Monday 19 May 2025 21:54:51 +0000 (0:00:01.792) 0:01:14.401 ************
2025-05-19 22:04:46.685964 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.685972 | orchestrator |
2025-05-19 22:04:46.685980 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-05-19 22:04:46.685988 | orchestrator | Monday 19 May 2025 21:54:52 +0000 (0:00:01.140) 0:01:15.542 ************
2025-05-19 22:04:46.685995 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.686003 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.686011 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.686052 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.686061 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.686068 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.686076 | orchestrator |
2025-05-19 22:04:46.686084 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-05-19 22:04:46.686092 | orchestrator | Monday 19 May 2025 21:54:53 +0000 (0:00:00.770) 0:01:16.313 ************
2025-05-19 22:04:46.686100 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.686108 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.686116 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.686124 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.686132 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.686140 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.686148 | orchestrator |
2025-05-19 22:04:46.686155 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-05-19 22:04:46.686163 | orchestrator | Monday 19 May 2025 21:54:54 +0000 (0:00:00.558) 0:01:16.871 ************
2025-05-19 22:04:46.686171 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-19 22:04:46.686179 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-19 22:04:46.686187 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-19 22:04:46.686195 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-19 22:04:46.686203 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-19 22:04:46.686211 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-19 22:04:46.686218 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-19 22:04:46.686226 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-19 22:04:46.686234 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-19 22:04:46.686242 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-19 22:04:46.686250 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-19 22:04:46.686258 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-19 22:04:46.686266 | orchestrator |
2025-05-19 22:04:46.686278 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-05-19 22:04:46.686286 | orchestrator | Monday 19 May 2025 21:54:55 +0000 (0:00:01.491) 0:01:18.363 ************
2025-05-19 22:04:46.686294 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.686307 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.686315 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.686323 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.686331 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.686339 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.686346 | orchestrator |
2025-05-19 22:04:46.686354 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-05-19 22:04:46.686362 | orchestrator | Monday 19 May 2025 21:54:56 +0000 (0:00:00.999) 0:01:19.362 ************
2025-05-19 22:04:46.686370 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.686383 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.686391 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.686399 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.686407 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.686414 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.686422 | orchestrator |
2025-05-19 22:04:46.686430 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-05-19 22:04:46.686438 | orchestrator | Monday 19 May 2025 21:54:57 +0000 (0:00:00.827) 0:01:20.190 ************
2025-05-19 22:04:46.686446 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.686454 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.686462 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.686469 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.686477 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.686485 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.686493 | orchestrator |
2025-05-19 22:04:46.686500 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-05-19 22:04:46.686508 | orchestrator | Monday 19 May 2025 21:54:58 +0000 (0:00:00.588) 0:01:20.778 ************
2025-05-19 22:04:46.686516 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.686524 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.686688 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.686707 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.686715 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.686723 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.686731 | orchestrator |
2025-05-19 22:04:46.686740 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-05-19 22:04:46.686748 | orchestrator | Monday 19 May 2025 21:54:59 +0000 (0:00:00.871) 0:01:21.649 ************
2025-05-19 22:04:46.686756 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.686764 | orchestrator |
2025-05-19 22:04:46.686772 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-05-19 22:04:46.686780 | orchestrator | Monday 19 May 2025 21:55:00 +0000 (0:00:01.228) 0:01:22.878 ************
2025-05-19 22:04:46.686788 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.686796 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.686804 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.686811 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.686819 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.686827 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.686835 | orchestrator |
2025-05-19 22:04:46.686843 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-05-19 22:04:46.686851 | orchestrator | Monday 19 May 2025 21:56:14 +0000 (0:01:14.694) 0:02:37.572 ************
2025-05-19 22:04:46.686858 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-19 22:04:46.686866 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-19 22:04:46.686874 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-19 22:04:46.686882 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.686890 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-19 22:04:46.686898 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-19 22:04:46.686906 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-19 22:04:46.686914 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.686922 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-19 22:04:46.686930 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-19 22:04:46.686937 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-19 22:04:46.686952 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.686958 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-19 22:04:46.686965 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-19 22:04:46.686972 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-19 22:04:46.686978 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.686985 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-19 22:04:46.686992 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-19 22:04:46.686998 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-19 22:04:46.687005 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687012 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-19 22:04:46.687018 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-19 22:04:46.687025 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-05-19 22:04:46.687049 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687056 | orchestrator |
2025-05-19 22:04:46.687062 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-05-19 22:04:46.687069 | orchestrator | Monday 19 May 2025 21:56:15 +0000 (0:00:00.818) 0:02:38.391 ************
2025-05-19 22:04:46.687081 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687088 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687094 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687101 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687108 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687114 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687121 | orchestrator |
2025-05-19 22:04:46.687128 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-05-19 22:04:46.687134 | orchestrator | Monday 19 May 2025 21:56:16 +0000 (0:00:00.460) 0:02:38.851 ************
2025-05-19 22:04:46.687141 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687148 | orchestrator |
2025-05-19 22:04:46.687179 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-05-19 22:04:46.687186 | orchestrator | Monday 19 May 2025 21:56:16 +0000 (0:00:00.109) 0:02:38.961 ************
2025-05-19 22:04:46.687193 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687200 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687206 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687213 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687220 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687226 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687233 | orchestrator |
2025-05-19 22:04:46.687240 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-05-19 22:04:46.687246 | orchestrator | Monday 19 May 2025 21:56:17 +0000 (0:00:00.710) 0:02:39.671 ************
2025-05-19 22:04:46.687253 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687260 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687267 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687273 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687280 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687287 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687293 | orchestrator |
2025-05-19 22:04:46.687300 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-05-19 22:04:46.687307 | orchestrator | Monday 19 May 2025 21:56:17 +0000 (0:00:00.869) 0:02:40.541 ************
2025-05-19 22:04:46.687314 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687321 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687327 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687334 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687345 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687352 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687359 | orchestrator |
2025-05-19 22:04:46.687365 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-05-19 22:04:46.687372 | orchestrator | Monday 19 May 2025 21:56:18 +0000 (0:00:00.810) 0:02:41.351 ************
2025-05-19 22:04:46.687379 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.687386 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.687392 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.687399 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.687406 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.687412 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.687419 | orchestrator |
2025-05-19 22:04:46.687426 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-05-19 22:04:46.687433 | orchestrator | Monday 19 May 2025 21:56:21 +0000 (0:00:02.472) 0:02:43.823 ************
2025-05-19 22:04:46.687440 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.687446 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.687453 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.687460 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.687466 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.687473 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.687479 | orchestrator |
2025-05-19 22:04:46.687486 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-05-19 22:04:46.687493 | orchestrator | Monday 19 May 2025 21:56:21 +0000 (0:00:00.655) 0:02:44.479 ************
2025-05-19 22:04:46.687500 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.687508 | orchestrator |
2025-05-19 22:04:46.687515 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-05-19 22:04:46.687522 | orchestrator | Monday 19 May 2025 21:56:23 +0000 (0:00:01.213) 0:02:45.692 ************
2025-05-19 22:04:46.687553 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687566 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687573 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687580 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687587 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687593 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687600 | orchestrator |
2025-05-19 22:04:46.687607 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-05-19 22:04:46.687614 | orchestrator | Monday 19 May 2025 21:56:23 +0000 (0:00:00.675) 0:02:46.367 ************
2025-05-19 22:04:46.687621 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687627 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687634 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687640 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687647 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687654 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687666 | orchestrator |
2025-05-19 22:04:46.687678 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-05-19 22:04:46.687690 | orchestrator | Monday 19 May 2025 21:56:24 +0000 (0:00:00.972) 0:02:47.339 ************
2025-05-19 22:04:46.687703 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687710 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687717 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687724 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687730 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687737 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687743 | orchestrator |
2025-05-19 22:04:46.687750 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-05-19 22:04:46.687763 | orchestrator | Monday 19 May 2025 21:56:25 +0000 (0:00:00.752) 0:02:48.092 ************
2025-05-19 22:04:46.687770 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687786 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687793 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687799 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687806 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687813 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687819 | orchestrator |
2025-05-19 22:04:46.687826 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-05-19 22:04:46.687833 | orchestrator | Monday 19 May 2025 21:56:26 +0000 (0:00:00.920) 0:02:49.012 ************
2025-05-19 22:04:46.687839 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687846 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687852 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687859 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687866 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687872 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687879 | orchestrator |
2025-05-19 22:04:46.687886 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-05-19 22:04:46.687892 | orchestrator | Monday 19 May 2025 21:56:27 +0000 (0:00:00.745) 0:02:49.758 ************
2025-05-19 22:04:46.687899 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687905 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687912 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687919 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687925 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687932 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687938 | orchestrator |
2025-05-19 22:04:46.687945 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-05-19 22:04:46.687951 | orchestrator | Monday 19 May 2025 21:56:27 +0000 (0:00:00.825) 0:02:50.584 ************
2025-05-19 22:04:46.687958 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.687965 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.687971 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.687978 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.687985 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.687991 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.687998 | orchestrator |
2025-05-19 22:04:46.688005 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-05-19 22:04:46.688011 | orchestrator | Monday 19 May 2025 21:56:28 +0000 (0:00:00.566) 0:02:51.150 ************
2025-05-19 22:04:46.688018 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.688024 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.688031 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.688038 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.688044 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.688051 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.688057 | orchestrator |
2025-05-19 22:04:46.688064 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-05-19 22:04:46.688071 | orchestrator | Monday 19 May 2025 21:56:29 +0000 (0:00:00.945) 0:02:52.095 ************
2025-05-19 22:04:46.688077 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.688103 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.688110 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.688117 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.688123 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.688130 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.688136 | orchestrator |
2025-05-19 22:04:46.688143 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-05-19 22:04:46.688150 | orchestrator | Monday 19 May 2025 21:56:30 +0000 (0:00:01.227) 0:02:53.323 ************
2025-05-19 22:04:46.688157 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.688164 | orchestrator |
2025-05-19 22:04:46.688170 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-05-19 22:04:46.688182 | orchestrator | Monday 19 May 2025 21:56:32 +0000 (0:00:01.414) 0:02:54.737 ************
2025-05-19 22:04:46.688189 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-05-19 22:04:46.688196 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-05-19 22:04:46.688202 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-05-19 22:04:46.688209 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-05-19 22:04:46.688216 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-05-19 22:04:46.688222 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-05-19 22:04:46.688229 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-05-19 22:04:46.688236 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-05-19 22:04:46.688242 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-05-19 22:04:46.688249 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-05-19 22:04:46.688255 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-05-19 22:04:46.688262 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-05-19 22:04:46.688269 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-05-19 22:04:46.688275 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-05-19 22:04:46.688282 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-05-19 22:04:46.688289 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-05-19 22:04:46.688295 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-05-19 22:04:46.688302 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-05-19 22:04:46.688309 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-05-19 22:04:46.688315 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-05-19 22:04:46.688322 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-05-19 22:04:46.688333 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-05-19 22:04:46.688340 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-05-19 22:04:46.688346 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-05-19 22:04:46.688356 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-05-19 22:04:46.688363 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-05-19 22:04:46.688370 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-05-19 22:04:46.688377 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-05-19 22:04:46.688383 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-05-19 22:04:46.688390 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-05-19 22:04:46.688397 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-05-19 22:04:46.688403 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-05-19 22:04:46.688410 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-05-19 22:04:46.688417 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-05-19 22:04:46.688423 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-05-19 22:04:46.688430 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-05-19 22:04:46.688437 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-05-19 22:04:46.688443 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-05-19 22:04:46.688450 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-05-19 22:04:46.688457 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-05-19 22:04:46.688463 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-05-19 22:04:46.688470 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-05-19 22:04:46.688484 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-05-19 22:04:46.688491 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-19 22:04:46.688498 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-05-19 22:04:46.688505 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-05-19 22:04:46.688511 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-05-19 22:04:46.688518 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-19 22:04:46.688525 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-19 22:04:46.688553 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-05-19 22:04:46.688560 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-19 22:04:46.688567 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-05-19 22:04:46.688573 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-19 22:04:46.688580 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-19 22:04:46.688587 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-19 22:04:46.688594 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-19 22:04:46.688600 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-19 22:04:46.688607 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-05-19 22:04:46.688613 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-19 22:04:46.688620 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-19 22:04:46.688627 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-19 22:04:46.688633 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-19 22:04:46.688640 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-19 22:04:46.688646 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-05-19 22:04:46.688653 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-05-19 22:04:46.688660 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-05-19 22:04:46.688666 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-05-19 22:04:46.688673 |
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 22:04:46.688682 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 22:04:46.688694 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-05-19 22:04:46.688705 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 22:04:46.688718 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 22:04:46.688731 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 22:04:46.688743 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 22:04:46.688755 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 22:04:46.688766 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-05-19 22:04:46.688773 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 22:04:46.688780 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 22:04:46.688787 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 22:04:46.688798 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-05-19 22:04:46.688805 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 22:04:46.688816 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-05-19 22:04:46.688829 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 22:04:46.688836 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-05-19 22:04:46.688843 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 22:04:46.688850 | orchestrator | changed: [testbed-node-0] => 
(item=/var/log/ceph) 2025-05-19 22:04:46.688856 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-05-19 22:04:46.688863 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-05-19 22:04:46.688870 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-05-19 22:04:46.688876 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-05-19 22:04:46.688883 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-05-19 22:04:46.688890 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-05-19 22:04:46.688896 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-05-19 22:04:46.688903 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-05-19 22:04:46.688909 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-05-19 22:04:46.688916 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-05-19 22:04:46.688923 | orchestrator | 2025-05-19 22:04:46.688929 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-19 22:04:46.688936 | orchestrator | Monday 19 May 2025 21:56:38 +0000 (0:00:06.421) 0:03:01.159 ************ 2025-05-19 22:04:46.688943 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.688949 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.688956 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.688963 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.688971 | orchestrator | 2025-05-19 22:04:46.688977 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-05-19 22:04:46.688984 | orchestrator | Monday 19 May 2025 21:56:39 +0000 (0:00:00.907) 0:03:02.066 ************ 2025-05-19 22:04:46.688991 | orchestrator | changed: 
[testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.688998 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.689004 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.689011 | orchestrator | 2025-05-19 22:04:46.689018 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-05-19 22:04:46.689024 | orchestrator | Monday 19 May 2025 21:56:40 +0000 (0:00:00.730) 0:03:02.797 ************ 2025-05-19 22:04:46.689031 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.689038 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.689045 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.689051 | orchestrator | 2025-05-19 22:04:46.689058 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-05-19 22:04:46.689065 | orchestrator | Monday 19 May 2025 21:56:41 +0000 (0:00:01.423) 0:03:04.220 ************ 2025-05-19 22:04:46.689071 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689078 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689085 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689091 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.689098 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.689104 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.689116 | 
orchestrator | 2025-05-19 22:04:46.689123 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-05-19 22:04:46.689130 | orchestrator | Monday 19 May 2025 21:56:42 +0000 (0:00:00.559) 0:03:04.780 ************ 2025-05-19 22:04:46.689136 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689143 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689150 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689156 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.689163 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.689169 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.689176 | orchestrator | 2025-05-19 22:04:46.689183 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-05-19 22:04:46.689189 | orchestrator | Monday 19 May 2025 21:56:42 +0000 (0:00:00.689) 0:03:05.469 ************ 2025-05-19 22:04:46.689196 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689203 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689209 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689216 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.689223 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.689229 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.689236 | orchestrator | 2025-05-19 22:04:46.689244 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-05-19 22:04:46.689251 | orchestrator | Monday 19 May 2025 21:56:43 +0000 (0:00:00.555) 0:03:06.024 ************ 2025-05-19 22:04:46.689258 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689266 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689277 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689284 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.689292 | orchestrator | 
skipping: [testbed-node-4] 2025-05-19 22:04:46.689299 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.689306 | orchestrator | 2025-05-19 22:04:46.689317 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-05-19 22:04:46.689324 | orchestrator | Monday 19 May 2025 21:56:44 +0000 (0:00:00.924) 0:03:06.949 ************ 2025-05-19 22:04:46.689331 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689339 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689346 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689353 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.689360 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.689367 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.689374 | orchestrator | 2025-05-19 22:04:46.689381 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-19 22:04:46.689389 | orchestrator | Monday 19 May 2025 21:56:44 +0000 (0:00:00.632) 0:03:07.582 ************ 2025-05-19 22:04:46.689396 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689403 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689410 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689417 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.689425 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.689432 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.689439 | orchestrator | 2025-05-19 22:04:46.689446 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-19 22:04:46.689454 | orchestrator | Monday 19 May 2025 21:56:45 +0000 (0:00:00.939) 0:03:08.522 ************ 2025-05-19 22:04:46.689461 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689468 | orchestrator | skipping: [testbed-node-1] 
2025-05-19 22:04:46.689476 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689483 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.689490 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.689497 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.689504 | orchestrator | 2025-05-19 22:04:46.689512 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-19 22:04:46.689525 | orchestrator | Monday 19 May 2025 21:56:46 +0000 (0:00:00.798) 0:03:09.320 ************ 2025-05-19 22:04:46.689564 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689571 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689578 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689585 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.689593 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.689600 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.689607 | orchestrator | 2025-05-19 22:04:46.689614 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-19 22:04:46.689622 | orchestrator | Monday 19 May 2025 21:56:47 +0000 (0:00:00.837) 0:03:10.158 ************ 2025-05-19 22:04:46.689629 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689636 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689643 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689650 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.689658 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.689665 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.689672 | orchestrator | 2025-05-19 22:04:46.689679 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-05-19 22:04:46.689686 | orchestrator | Monday 19 May 2025 21:56:52 +0000 (0:00:04.447) 
0:03:14.605 ************ 2025-05-19 22:04:46.689693 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689701 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689708 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689716 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.689728 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.689740 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.689754 | orchestrator | 2025-05-19 22:04:46.689768 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-05-19 22:04:46.689780 | orchestrator | Monday 19 May 2025 21:56:53 +0000 (0:00:01.024) 0:03:15.630 ************ 2025-05-19 22:04:46.689792 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689800 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689808 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689815 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.689822 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.689829 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.689836 | orchestrator | 2025-05-19 22:04:46.689843 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-05-19 22:04:46.689850 | orchestrator | Monday 19 May 2025 21:56:53 +0000 (0:00:00.760) 0:03:16.391 ************ 2025-05-19 22:04:46.689858 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689865 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689872 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689879 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.689886 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.689893 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.689900 | orchestrator | 2025-05-19 22:04:46.689907 | orchestrator | TASK [ceph-config : Render rgw configs] 
**************************************** 2025-05-19 22:04:46.689914 | orchestrator | Monday 19 May 2025 21:56:54 +0000 (0:00:00.988) 0:03:17.380 ************ 2025-05-19 22:04:46.689922 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.689929 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.689936 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.689943 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.689950 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.689958 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.689965 | orchestrator | 2025-05-19 22:04:46.689979 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-05-19 22:04:46.689991 | orchestrator | Monday 19 May 2025 21:56:55 +0000 (0:00:00.741) 0:03:18.122 ************ 2025-05-19 22:04:46.689998 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690005 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.690012 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.690052 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-05-19 22:04:46.690062 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, 
{'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-05-19 22:04:46.690070 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.690077 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-05-19 22:04:46.690085 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-05-19 22:04:46.690092 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.690100 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-05-19 22:04:46.690107 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-05-19 22:04:46.690115 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.690122 | orchestrator | 2025-05-19 22:04:46.690129 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-05-19 22:04:46.690137 | orchestrator | Monday 19 May 2025 21:56:56 +0000 (0:00:01.037) 0:03:19.159 ************ 2025-05-19 22:04:46.690144 | 
orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690151 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.690159 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.690166 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.690173 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.690180 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.690187 | orchestrator | 2025-05-19 22:04:46.690195 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-05-19 22:04:46.690202 | orchestrator | Monday 19 May 2025 21:56:57 +0000 (0:00:00.770) 0:03:19.930 ************ 2025-05-19 22:04:46.690209 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690216 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.690224 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.690231 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.690238 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.690245 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.690252 | orchestrator | 2025-05-19 22:04:46.690259 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 22:04:46.690272 | orchestrator | Monday 19 May 2025 21:56:58 +0000 (0:00:00.913) 0:03:20.843 ************ 2025-05-19 22:04:46.690279 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690286 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.690294 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.690301 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.690308 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.690315 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.690322 | orchestrator | 2025-05-19 22:04:46.690330 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv4] **** 2025-05-19 22:04:46.690337 | orchestrator | Monday 19 May 2025 21:56:59 +0000 (0:00:00.925) 0:03:21.769 ************ 2025-05-19 22:04:46.690344 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690351 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.690358 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.690365 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.690373 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.690380 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.690387 | orchestrator | 2025-05-19 22:04:46.690394 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 22:04:46.690401 | orchestrator | Monday 19 May 2025 21:57:00 +0000 (0:00:01.240) 0:03:23.009 ************ 2025-05-19 22:04:46.690409 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690416 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.690423 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.690435 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.690442 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.690449 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.690457 | orchestrator | 2025-05-19 22:04:46.690464 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 22:04:46.690475 | orchestrator | Monday 19 May 2025 21:57:01 +0000 (0:00:00.685) 0:03:23.695 ************ 2025-05-19 22:04:46.690483 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690490 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.690497 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.690504 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.690511 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.690519 | orchestrator | ok: 
[testbed-node-5] 2025-05-19 22:04:46.690577 | orchestrator | 2025-05-19 22:04:46.690589 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-19 22:04:46.690597 | orchestrator | Monday 19 May 2025 21:57:01 +0000 (0:00:00.875) 0:03:24.571 ************ 2025-05-19 22:04:46.690604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 22:04:46.690611 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 22:04:46.690619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 22:04:46.690626 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690633 | orchestrator | 2025-05-19 22:04:46.690640 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 22:04:46.690648 | orchestrator | Monday 19 May 2025 21:57:02 +0000 (0:00:00.436) 0:03:25.007 ************ 2025-05-19 22:04:46.690655 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 22:04:46.690662 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 22:04:46.690669 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 22:04:46.690676 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690683 | orchestrator | 2025-05-19 22:04:46.690691 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 22:04:46.690698 | orchestrator | Monday 19 May 2025 21:57:02 +0000 (0:00:00.368) 0:03:25.375 ************ 2025-05-19 22:04:46.690706 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-19 22:04:46.690718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-19 22:04:46.690726 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-19 22:04:46.690733 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690740 | 
orchestrator | 2025-05-19 22:04:46.690753 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-19 22:04:46.690766 | orchestrator | Monday 19 May 2025 21:57:03 +0000 (0:00:00.341) 0:03:25.717 ************ 2025-05-19 22:04:46.690779 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690791 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.690804 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.690816 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.690828 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.690840 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.690848 | orchestrator | 2025-05-19 22:04:46.690855 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-19 22:04:46.690862 | orchestrator | Monday 19 May 2025 21:57:03 +0000 (0:00:00.616) 0:03:26.334 ************ 2025-05-19 22:04:46.690870 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-19 22:04:46.690877 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.690884 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-19 22:04:46.690892 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.690899 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-19 22:04:46.690906 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.690913 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-19 22:04:46.690921 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-19 22:04:46.690928 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-19 22:04:46.690935 | orchestrator | 2025-05-19 22:04:46.690942 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-05-19 22:04:46.690949 | orchestrator | Monday 19 May 2025 21:57:05 +0000 (0:00:01.985) 0:03:28.320 ************ 2025-05-19 22:04:46.690957 | orchestrator | changed: 
[testbed-node-0]
2025-05-19 22:04:46.690964 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.690971 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.690978 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.690986 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.690993 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.691000 | orchestrator |
2025-05-19 22:04:46.691007 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-05-19 22:04:46.691015 | orchestrator | Monday 19 May 2025 21:57:09 +0000 (0:00:03.575) 0:03:31.895 ************
2025-05-19 22:04:46.691022 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.691029 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.691036 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.691043 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.691051 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.691058 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.691065 | orchestrator |
2025-05-19 22:04:46.691072 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-05-19 22:04:46.691079 | orchestrator | Monday 19 May 2025 21:57:10 +0000 (0:00:01.355) 0:03:33.250 ************
2025-05-19 22:04:46.691087 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691094 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.691101 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.691109 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:04:46.691118 | orchestrator |
2025-05-19 22:04:46.691127 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-05-19 22:04:46.691135 | orchestrator | Monday 19 May 2025 21:57:12 +0000 (0:00:01.483) 0:03:34.734 ************
2025-05-19 22:04:46.691144 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.691153 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.691161 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.691179 | orchestrator |
2025-05-19 22:04:46.691188 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-05-19 22:04:46.691203 | orchestrator | Monday 19 May 2025 21:57:12 +0000 (0:00:00.380) 0:03:35.115 ************
2025-05-19 22:04:46.691212 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.691221 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.691229 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.691238 | orchestrator |
2025-05-19 22:04:46.691252 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-05-19 22:04:46.691260 | orchestrator | Monday 19 May 2025 21:57:14 +0000 (0:00:01.806) 0:03:36.921 ************
2025-05-19 22:04:46.691269 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 22:04:46.691278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-19 22:04:46.691286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-19 22:04:46.691295 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.691304 | orchestrator |
2025-05-19 22:04:46.691312 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-05-19 22:04:46.691321 | orchestrator | Monday 19 May 2025 21:57:15 +0000 (0:00:00.718) 0:03:37.639 ************
2025-05-19 22:04:46.691330 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.691339 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.691347 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.691356 | orchestrator |
2025-05-19 22:04:46.691365 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-05-19 22:04:46.691373 | orchestrator | Monday 19 May 2025 21:57:15 +0000 (0:00:00.388) 0:03:38.028 ************
2025-05-19 22:04:46.691382 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.691390 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.691399 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.691408 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.691416 | orchestrator |
2025-05-19 22:04:46.691425 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-05-19 22:04:46.691434 | orchestrator | Monday 19 May 2025 21:57:16 +0000 (0:00:01.087) 0:03:39.115 ************
2025-05-19 22:04:46.691442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 22:04:46.691451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 22:04:46.691459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 22:04:46.691468 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691476 | orchestrator |
2025-05-19 22:04:46.691485 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-05-19 22:04:46.691494 | orchestrator | Monday 19 May 2025 21:57:16 +0000 (0:00:00.441) 0:03:39.557 ************
2025-05-19 22:04:46.691502 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691511 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.691520 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.691553 | orchestrator |
2025-05-19 22:04:46.691565 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-05-19 22:04:46.691574 | orchestrator | Monday 19 May 2025 21:57:17 +0000 (0:00:00.379) 0:03:39.937 ************
2025-05-19 22:04:46.691582 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691591 | orchestrator |
2025-05-19 22:04:46.691600 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-05-19 22:04:46.691608 | orchestrator | Monday 19 May 2025 21:57:17 +0000 (0:00:00.302) 0:03:40.239 ************
2025-05-19 22:04:46.691617 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691626 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.691634 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.691643 | orchestrator |
2025-05-19 22:04:46.691652 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-05-19 22:04:46.691660 | orchestrator | Monday 19 May 2025 21:57:17 +0000 (0:00:00.340) 0:03:40.580 ************
2025-05-19 22:04:46.691675 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691684 | orchestrator |
2025-05-19 22:04:46.691693 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-05-19 22:04:46.691701 | orchestrator | Monday 19 May 2025 21:57:18 +0000 (0:00:00.227) 0:03:40.807 ************
2025-05-19 22:04:46.691710 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691719 | orchestrator |
2025-05-19 22:04:46.691728 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-05-19 22:04:46.691736 | orchestrator | Monday 19 May 2025 21:57:18 +0000 (0:00:00.214) 0:03:41.022 ************
2025-05-19 22:04:46.691745 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691753 | orchestrator |
2025-05-19 22:04:46.691762 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-05-19 22:04:46.691772 | orchestrator | Monday 19 May 2025 21:57:18 +0000 (0:00:00.416) 0:03:41.439 ************
2025-05-19 22:04:46.691787 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691803 | orchestrator |
2025-05-19 22:04:46.691820 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-05-19 22:04:46.691831 | orchestrator | Monday 19 May 2025 21:57:19 +0000 (0:00:00.234) 0:03:41.674 ************
2025-05-19 22:04:46.691839 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691848 | orchestrator |
2025-05-19 22:04:46.691857 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-05-19 22:04:46.691866 | orchestrator | Monday 19 May 2025 21:57:19 +0000 (0:00:00.242) 0:03:41.916 ************
2025-05-19 22:04:46.691874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 22:04:46.691883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 22:04:46.691892 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 22:04:46.691900 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691909 | orchestrator |
2025-05-19 22:04:46.691918 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-05-19 22:04:46.691926 | orchestrator | Monday 19 May 2025 21:57:19 +0000 (0:00:00.408) 0:03:42.324 ************
2025-05-19 22:04:46.691935 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.691944 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.691952 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.691961 | orchestrator |
2025-05-19 22:04:46.691975 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-05-19 22:04:46.691984 | orchestrator | Monday 19 May 2025 21:57:20 +0000 (0:00:00.359) 0:03:42.684 ************
2025-05-19 22:04:46.691993 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.692002 | orchestrator |
2025-05-19 22:04:46.692016 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-05-19 22:04:46.692025 | orchestrator | Monday 19 May 2025 21:57:20 +0000 (0:00:00.254) 0:03:42.939 ************
2025-05-19 22:04:46.692034 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.692042 | orchestrator |
2025-05-19 22:04:46.692051 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-05-19 22:04:46.692060 | orchestrator | Monday 19 May 2025 21:57:20 +0000 (0:00:00.245) 0:03:43.185 ************
2025-05-19 22:04:46.692069 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.692078 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.692086 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.692095 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.692104 | orchestrator |
2025-05-19 22:04:46.692113 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-05-19 22:04:46.692121 | orchestrator | Monday 19 May 2025 21:57:21 +0000 (0:00:01.250) 0:03:44.435 ************
2025-05-19 22:04:46.692130 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.692139 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.692148 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.692163 | orchestrator |
2025-05-19 22:04:46.692171 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-05-19 22:04:46.692180 | orchestrator | Monday 19 May 2025 21:57:22 +0000 (0:00:00.367) 0:03:44.803 ************
2025-05-19 22:04:46.692189 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.692198 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.692206 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.692215 | orchestrator |
2025-05-19 22:04:46.692224 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-05-19 22:04:46.692232 | orchestrator | Monday 19 May 2025 21:57:23 +0000 (0:00:01.437) 0:03:46.241 ************
2025-05-19 22:04:46.692241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 22:04:46.692250 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 22:04:46.692258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 22:04:46.692267 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.692276 | orchestrator |
2025-05-19 22:04:46.692284 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-05-19 22:04:46.692293 | orchestrator | Monday 19 May 2025 21:57:24 +0000 (0:00:01.197) 0:03:47.438 ************
2025-05-19 22:04:46.692302 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.692310 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.692319 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.692328 | orchestrator |
2025-05-19 22:04:46.692337 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-05-19 22:04:46.692345 | orchestrator | Monday 19 May 2025 21:57:25 +0000 (0:00:00.363) 0:03:47.802 ************
2025-05-19 22:04:46.692354 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.692363 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.692372 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.692380 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.692389 | orchestrator |
2025-05-19 22:04:46.692398 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-05-19 22:04:46.692406 | orchestrator | Monday 19 May 2025 21:57:26 +0000 (0:00:01.100) 0:03:48.902 ************
2025-05-19 22:04:46.692415 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.692424 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.692433 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.692441 | orchestrator |
2025-05-19 22:04:46.692450 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-05-19 22:04:46.692459 | orchestrator | Monday 19 May 2025 21:57:26 +0000 (0:00:00.376) 0:03:49.279 ************
2025-05-19 22:04:46.692468 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.692476 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.692485 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.692494 | orchestrator |
2025-05-19 22:04:46.692502 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-05-19 22:04:46.692511 | orchestrator | Monday 19 May 2025 21:57:27 +0000 (0:00:01.226) 0:03:50.505 ************
2025-05-19 22:04:46.692520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 22:04:46.692571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 22:04:46.692582 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 22:04:46.692591 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.692599 | orchestrator |
2025-05-19 22:04:46.692608 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-05-19 22:04:46.692617 | orchestrator | Monday 19 May 2025 21:57:28 +0000 (0:00:00.931) 0:03:51.437 ************
2025-05-19 22:04:46.692625 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.692634 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.692643 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.692651 | orchestrator |
2025-05-19 22:04:46.692660 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-05-19 22:04:46.692675 | orchestrator | Monday 19 May 2025 21:57:29 +0000 (0:00:00.400) 0:03:51.838 ************
2025-05-19 22:04:46.692684 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.692692 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.692701 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.692709 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.692718 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.692726 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.692735 | orchestrator |
2025-05-19 22:04:46.692744 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-05-19 22:04:46.692753 | orchestrator | Monday 19 May 2025 21:57:30 +0000 (0:00:00.912) 0:03:52.750 ************
2025-05-19 22:04:46.692767 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.692776 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.692784 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.692799 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:04:46.692815 | orchestrator |
2025-05-19 22:04:46.692829 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-05-19 22:04:46.692844 | orchestrator | Monday 19 May 2025 21:57:31 +0000 (0:00:01.120) 0:03:53.870 ************
2025-05-19 22:04:46.692859 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.692873 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.692886 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.692898 | orchestrator |
2025-05-19 22:04:46.692906 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-05-19 22:04:46.692914 | orchestrator | Monday 19 May 2025 21:57:31 +0000 (0:00:00.357) 0:03:54.228 ************
2025-05-19 22:04:46.692921 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.692929 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.692937 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.692945 | orchestrator |
2025-05-19 22:04:46.692953 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-05-19 22:04:46.692961 | orchestrator | Monday 19 May 2025 21:57:32 +0000 (0:00:01.355) 0:03:55.583 ************
2025-05-19 22:04:46.692969 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-19 22:04:46.692977 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-19 22:04:46.692984 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-19 22:04:46.692992 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693000 | orchestrator |
2025-05-19 22:04:46.693008 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-05-19 22:04:46.693016 | orchestrator | Monday 19 May 2025 21:57:33 +0000 (0:00:00.862) 0:03:56.445 ************
2025-05-19 22:04:46.693024 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.693032 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.693040 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.693048 | orchestrator |
2025-05-19 22:04:46.693056 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-05-19 22:04:46.693063 | orchestrator |
2025-05-19 22:04:46.693071 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-19 22:04:46.693079 | orchestrator | Monday 19 May 2025 21:57:34 +0000 (0:00:01.000) 0:03:57.446 ************
2025-05-19 22:04:46.693087 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:04:46.693095 | orchestrator |
2025-05-19 22:04:46.693103 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-19 22:04:46.693111 | orchestrator | Monday 19 May 2025 21:57:35 +0000 (0:00:00.588) 0:03:58.034 ************
2025-05-19 22:04:46.693119 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:04:46.693127 | orchestrator |
2025-05-19 22:04:46.693135 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-19 22:04:46.693149 | orchestrator | Monday 19 May 2025 21:57:36 +0000 (0:00:00.863) 0:03:58.898 ************
2025-05-19 22:04:46.693157 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.693165 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.693173 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.693181 | orchestrator |
2025-05-19 22:04:46.693189 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-19 22:04:46.693197 | orchestrator | Monday 19 May 2025 21:57:37 +0000 (0:00:00.766) 0:03:59.665 ************
2025-05-19 22:04:46.693205 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693213 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.693221 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.693229 | orchestrator |
2025-05-19 22:04:46.693237 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-19 22:04:46.693245 | orchestrator | Monday 19 May 2025 21:57:37 +0000 (0:00:00.322) 0:03:59.987 ************
2025-05-19 22:04:46.693253 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693260 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.693268 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.693276 | orchestrator |
2025-05-19 22:04:46.693284 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-19 22:04:46.693292 | orchestrator | Monday 19 May 2025 21:57:37 +0000 (0:00:00.307) 0:04:00.295 ************
2025-05-19 22:04:46.693300 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693308 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.693316 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.693323 | orchestrator |
2025-05-19 22:04:46.693331 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-19 22:04:46.693339 | orchestrator | Monday 19 May 2025 21:57:38 +0000 (0:00:00.577) 0:04:00.873 ************
2025-05-19 22:04:46.693347 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.693355 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.693363 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.693371 | orchestrator |
2025-05-19 22:04:46.693379 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-19 22:04:46.693387 | orchestrator | Monday 19 May 2025 21:57:38 +0000 (0:00:00.663) 0:04:01.537 ************
2025-05-19 22:04:46.693395 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693402 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.693410 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.693418 | orchestrator |
2025-05-19 22:04:46.693426 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-19 22:04:46.693434 | orchestrator | Monday 19 May 2025 21:57:39 +0000 (0:00:00.334) 0:04:01.872 ************
2025-05-19 22:04:46.693442 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693449 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.693457 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.693465 | orchestrator |
2025-05-19 22:04:46.693473 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-19 22:04:46.693486 | orchestrator | Monday 19 May 2025 21:57:39 +0000 (0:00:00.273) 0:04:02.145 ************
2025-05-19 22:04:46.693494 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.693502 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.693510 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.693518 | orchestrator |
2025-05-19 22:04:46.693549 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-19 22:04:46.693559 | orchestrator | Monday 19 May 2025 21:57:40 +0000 (0:00:00.903) 0:04:03.049 ************
2025-05-19 22:04:46.693567 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.693575 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.693583 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.693591 | orchestrator |
2025-05-19 22:04:46.693599 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-19 22:04:46.693606 | orchestrator | Monday 19 May 2025 21:57:41 +0000 (0:00:00.700) 0:04:03.749 ************
2025-05-19 22:04:46.693619 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693627 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.693636 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.693649 | orchestrator |
2025-05-19 22:04:46.693662 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-05-19 22:04:46.693670 | orchestrator | Monday 19 May 2025 21:57:41 +0000 (0:00:00.273) 0:04:04.022 ************
2025-05-19 22:04:46.693678 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.693686 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.693694 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.693702 | orchestrator |
2025-05-19 22:04:46.693710 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-05-19 22:04:46.693718 | orchestrator | Monday 19 May 2025 21:57:41 +0000 (0:00:00.292) 0:04:04.315 ************
2025-05-19 22:04:46.693725 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693734 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.693747 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.693759 | orchestrator |
2025-05-19 22:04:46.693776 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-05-19 22:04:46.693794 | orchestrator | Monday 19 May 2025 21:57:42 +0000 (0:00:00.465) 0:04:04.781 ************
2025-05-19 22:04:46.693807 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693819 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.693832 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.693844 | orchestrator |
2025-05-19 22:04:46.693857 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-05-19 22:04:46.693870 | orchestrator | Monday 19 May 2025 21:57:42 +0000 (0:00:00.358) 0:04:05.139 ************
2025-05-19 22:04:46.693884 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693898 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.693910 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.693923 | orchestrator |
2025-05-19 22:04:46.693931 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-05-19 22:04:46.693939 | orchestrator | Monday 19 May 2025 21:57:42 +0000 (0:00:00.338) 0:04:05.477 ************
2025-05-19 22:04:46.693947 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.693955 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.693963 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.693970 | orchestrator |
2025-05-19 22:04:46.693978 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-05-19 22:04:46.693986 | orchestrator | Monday 19 May 2025 21:57:43 +0000 (0:00:00.283) 0:04:05.761 ************
2025-05-19 22:04:46.693994 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.694002 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.694010 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.694146 | orchestrator |
2025-05-19 22:04:46.694158 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-05-19 22:04:46.694166 | orchestrator | Monday 19 May 2025 21:57:43 +0000 (0:00:00.519) 0:04:06.281 ************
2025-05-19 22:04:46.694174 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.694182 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.694191 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.694198 | orchestrator |
2025-05-19 22:04:46.694206 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-05-19 22:04:46.694214 | orchestrator | Monday 19 May 2025 21:57:43 +0000 (0:00:00.313) 0:04:06.594 ************
2025-05-19 22:04:46.694222 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.694230 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.694238 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.694246 | orchestrator |
2025-05-19 22:04:46.694253 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-19 22:04:46.694261 | orchestrator | Monday 19 May 2025 21:57:44 +0000 (0:00:00.244) 0:04:06.839 ************
2025-05-19 22:04:46.694277 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.694285 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.694293 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.694300 | orchestrator |
2025-05-19 22:04:46.694309 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-05-19 22:04:46.694317 | orchestrator | Monday 19 May 2025 21:57:44 +0000 (0:00:00.680) 0:04:07.519 ************
2025-05-19 22:04:46.694325 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.694333 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.694340 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.694348 | orchestrator |
2025-05-19 22:04:46.694356 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-05-19 22:04:46.694364 | orchestrator | Monday 19 May 2025 21:57:45 +0000 (0:00:00.317) 0:04:07.836 ************
2025-05-19 22:04:46.694372 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:04:46.694380 | orchestrator |
2025-05-19 22:04:46.694388 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-05-19 22:04:46.694396 | orchestrator | Monday 19 May 2025 21:57:45 +0000 (0:00:00.550) 0:04:08.387 ************
2025-05-19 22:04:46.694404 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.694412 | orchestrator |
2025-05-19 22:04:46.694420 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-05-19 22:04:46.694428 | orchestrator | Monday 19 May 2025 21:57:45 +0000 (0:00:00.147) 0:04:08.534 ************
2025-05-19 22:04:46.694436 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-19 22:04:46.694444 | orchestrator |
2025-05-19 22:04:46.694484 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-05-19 22:04:46.694494 | orchestrator | Monday 19 May 2025 21:57:47 +0000 (0:00:01.486) 0:04:10.020 ************
2025-05-19 22:04:46.694502 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.694515 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.694523 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.694554 | orchestrator |
2025-05-19 22:04:46.694562 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-05-19 22:04:46.694570 | orchestrator | Monday 19 May 2025 21:57:47 +0000 (0:00:00.333) 0:04:10.354 ************
2025-05-19 22:04:46.694578 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.694586 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.694594 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.694602 | orchestrator |
2025-05-19 22:04:46.694623 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-05-19 22:04:46.694632 | orchestrator | Monday 19 May 2025 21:57:48 +0000 (0:00:00.382) 0:04:10.737 ************
2025-05-19 22:04:46.694640 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.694648 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.694656 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.694663 | orchestrator |
2025-05-19 22:04:46.694671 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-05-19 22:04:46.694679 | orchestrator | Monday 19 May 2025 21:57:49 +0000 (0:00:01.350) 0:04:12.087 ************
2025-05-19 22:04:46.694687 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.694695 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.694703 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.694711 | orchestrator |
2025-05-19 22:04:46.694719 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-05-19 22:04:46.694727 | orchestrator | Monday 19 May 2025 21:57:50 +0000 (0:00:01.213) 0:04:13.300 ************
2025-05-19 22:04:46.694735 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.694743 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.694751 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.694759 | orchestrator |
2025-05-19 22:04:46.694767 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-05-19 22:04:46.694775 | orchestrator | Monday 19 May 2025 21:57:51 +0000 (0:00:00.705) 0:04:14.006 ************
2025-05-19 22:04:46.694789 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.694797 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.694805 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.694813 | orchestrator |
2025-05-19 22:04:46.694821 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-05-19 22:04:46.694829 | orchestrator | Monday 19 May 2025 21:57:52 +0000 (0:00:00.725) 0:04:14.731 ************
2025-05-19 22:04:46.694837 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.694845 | orchestrator |
2025-05-19 22:04:46.694853 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-05-19 22:04:46.694861 | orchestrator | Monday 19 May 2025 21:57:53 +0000 (0:00:01.357) 0:04:16.089 ************
2025-05-19 22:04:46.694869 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.694877 | orchestrator |
2025-05-19 22:04:46.694886 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-05-19 22:04:46.694894 | orchestrator | Monday 19 May 2025 21:57:54 +0000 (0:00:00.769) 0:04:16.859 ************
2025-05-19 22:04:46.694902 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-19 22:04:46.694910 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 22:04:46.694918 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-19 22:04:46.694926 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 22:04:46.694934 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-05-19 22:04:46.694942 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-19 22:04:46.694950 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 22:04:46.694958 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-05-19 22:04:46.694966 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-19 22:04:46.694975 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-05-19 22:04:46.694983 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-05-19 22:04:46.694991 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-05-19 22:04:46.694999 | orchestrator |
2025-05-19 22:04:46.695007 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-05-19 22:04:46.695015 | orchestrator | Monday 19 May 2025 21:57:57 +0000 (0:00:03.709) 0:04:20.568 ************
2025-05-19 22:04:46.695023 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.695031 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.695039 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.695047 | orchestrator |
2025-05-19 22:04:46.695055 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-05-19 22:04:46.695063 | orchestrator | Monday 19 May 2025 21:57:59 +0000 (0:00:01.661) 0:04:22.229 ************
2025-05-19 22:04:46.695075 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.695088 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.695100 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.695112 | orchestrator |
2025-05-19 22:04:46.695125 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-05-19 22:04:46.695138 | orchestrator | Monday 19 May 2025 21:58:00 +0000 (0:00:00.524) 0:04:22.754 ************
2025-05-19 22:04:46.695150 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.695163 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.695176 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.695190 | orchestrator |
2025-05-19 22:04:46.695204 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-05-19 22:04:46.695216 | orchestrator | Monday 19 May 2025 21:58:00 +0000 (0:00:00.457) 0:04:23.211 ************
2025-05-19 22:04:46.695228 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.695236 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.695244 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.695252 | orchestrator |
2025-05-19 22:04:46.695260 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-05-19 22:04:46.695308 | orchestrator | Monday 19 May 2025 21:58:03 +0000 (0:00:02.412) 0:04:25.623 ************
2025-05-19 22:04:46.695318 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.695326 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.695334 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.695342 | orchestrator |
2025-05-19 22:04:46.695355 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-05-19 22:04:46.695363 | orchestrator | Monday 19 May 2025 21:58:04 +0000 (0:00:01.798) 0:04:27.422 ************
2025-05-19 22:04:46.695371 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.695379 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.695387 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.695395 | orchestrator |
2025-05-19 22:04:46.695403 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-05-19 22:04:46.695411 | orchestrator | Monday 19 May 2025 21:58:05 +0000 (0:00:00.480) 0:04:27.902 ************
2025-05-19 22:04:46.695419 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:04:46.695427 | orchestrator |
2025-05-19 22:04:46.695435 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-05-19 22:04:46.695443 | orchestrator | Monday 19 May 2025 21:58:05 +0000 (0:00:00.638) 0:04:28.540 ************
2025-05-19 22:04:46.695451 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.695458 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.695466 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.695474 | orchestrator |
2025-05-19 22:04:46.695482 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-05-19 22:04:46.695490 | orchestrator | Monday 19 May 2025 21:58:06 +0000 (0:00:00.750) 0:04:29.291 ************
2025-05-19 22:04:46.695498 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.695506 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.695514 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.695521 | orchestrator |
2025-05-19 22:04:46.695580 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-05-19 22:04:46.695590 | orchestrator | Monday 19 May 2025 21:58:07 +0000 (0:00:00.436) 0:04:29.727 ************
2025-05-19 22:04:46.695598 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:04:46.695606 | orchestrator |
2025-05-19 22:04:46.695614 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-05-19 22:04:46.695622 | orchestrator | Monday 19 May 2025 21:58:07 +0000 (0:00:00.673) 0:04:30.401 ************
2025-05-19 22:04:46.695630 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.695637 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.695644 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.695651 | orchestrator |
2025-05-19 22:04:46.695657 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-05-19 22:04:46.695664 | orchestrator | Monday 19 May 2025 21:58:10 +0000 (0:00:02.555)
0:04:32.956 ************ 2025-05-19 22:04:46.695671 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:04:46.695677 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:04:46.695684 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:04:46.695690 | orchestrator | 2025-05-19 22:04:46.695697 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-05-19 22:04:46.695704 | orchestrator | Monday 19 May 2025 21:58:11 +0000 (0:00:01.214) 0:04:34.170 ************ 2025-05-19 22:04:46.695710 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:04:46.695717 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:04:46.695725 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:04:46.695736 | orchestrator | 2025-05-19 22:04:46.695747 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-05-19 22:04:46.695758 | orchestrator | Monday 19 May 2025 21:58:13 +0000 (0:00:01.809) 0:04:35.979 ************ 2025-05-19 22:04:46.695768 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:04:46.695792 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:04:46.695803 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:04:46.695814 | orchestrator | 2025-05-19 22:04:46.695825 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-05-19 22:04:46.695833 | orchestrator | Monday 19 May 2025 21:58:15 +0000 (0:00:02.041) 0:04:38.020 ************ 2025-05-19 22:04:46.695840 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:04:46.695846 | orchestrator | 2025-05-19 22:04:46.695853 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-05-19 22:04:46.695860 | orchestrator | Monday 19 May 2025 21:58:16 +0000 (0:00:00.874) 0:04:38.895 ************ 2025-05-19 22:04:46.695866 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-05-19 22:04:46.695873 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.695879 | orchestrator | 2025-05-19 22:04:46.695886 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-05-19 22:04:46.695893 | orchestrator | Monday 19 May 2025 21:58:38 +0000 (0:00:21.873) 0:05:00.768 ************ 2025-05-19 22:04:46.695900 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.695906 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.695913 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.695919 | orchestrator | 2025-05-19 22:04:46.695926 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-05-19 22:04:46.695933 | orchestrator | Monday 19 May 2025 21:58:47 +0000 (0:00:09.112) 0:05:09.881 ************ 2025-05-19 22:04:46.695939 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.695946 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.695953 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.695959 | orchestrator | 2025-05-19 22:04:46.695966 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-05-19 22:04:46.695973 | orchestrator | Monday 19 May 2025 21:58:47 +0000 (0:00:00.635) 0:05:10.516 ************ 2025-05-19 22:04:46.696013 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b5b88c94b79a0d0c4bbd4293666105265302690f'}}, {'key': 'public_network', 
'value': '192.168.16.0/20'}]) 2025-05-19 22:04:46.696025 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b5b88c94b79a0d0c4bbd4293666105265302690f'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-05-19 22:04:46.696033 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b5b88c94b79a0d0c4bbd4293666105265302690f'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-05-19 22:04:46.696042 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b5b88c94b79a0d0c4bbd4293666105265302690f'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-05-19 22:04:46.696049 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b5b88c94b79a0d0c4bbd4293666105265302690f'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-05-19 22:04:46.696062 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__b5b88c94b79a0d0c4bbd4293666105265302690f'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b5b88c94b79a0d0c4bbd4293666105265302690f'}])  2025-05-19 22:04:46.696070 | orchestrator | 2025-05-19 22:04:46.696077 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-19 22:04:46.696084 | orchestrator | Monday 19 May 2025 21:59:01 +0000 (0:00:14.027) 0:05:24.544 ************ 2025-05-19 22:04:46.696090 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696097 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.696104 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.696110 | orchestrator | 2025-05-19 22:04:46.696117 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-05-19 22:04:46.696124 | orchestrator | Monday 19 May 2025 21:59:02 +0000 (0:00:00.366) 0:05:24.910 ************ 2025-05-19 22:04:46.696131 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:04:46.696137 | orchestrator | 2025-05-19 22:04:46.696144 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-05-19 22:04:46.696151 | orchestrator | Monday 19 May 2025 21:59:03 +0000 (0:00:00.818) 0:05:25.728 ************ 2025-05-19 22:04:46.696158 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.696164 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.696171 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.696178 | orchestrator | 2025-05-19 22:04:46.696185 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-05-19 22:04:46.696192 | orchestrator | Monday 19 May 2025 21:59:03 +0000 (0:00:00.339) 0:05:26.068 ************ 2025-05-19 22:04:46.696199 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696205 | 
orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.696212 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.696219 | orchestrator | 2025-05-19 22:04:46.696225 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-05-19 22:04:46.696232 | orchestrator | Monday 19 May 2025 21:59:03 +0000 (0:00:00.344) 0:05:26.413 ************ 2025-05-19 22:04:46.696239 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 22:04:46.696246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 22:04:46.696252 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 22:04:46.696259 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696265 | orchestrator | 2025-05-19 22:04:46.696272 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-05-19 22:04:46.696279 | orchestrator | Monday 19 May 2025 21:59:04 +0000 (0:00:00.973) 0:05:27.386 ************ 2025-05-19 22:04:46.696286 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.696292 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.696299 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.696306 | orchestrator | 2025-05-19 22:04:46.696312 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-19 22:04:46.696319 | orchestrator | 2025-05-19 22:04:46.696326 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-19 22:04:46.696352 | orchestrator | Monday 19 May 2025 21:59:05 +0000 (0:00:00.929) 0:05:28.316 ************ 2025-05-19 22:04:46.696364 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:04:46.696371 | orchestrator | 2025-05-19 22:04:46.696378 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-05-19 22:04:46.696385 | orchestrator | Monday 19 May 2025 21:59:06 +0000 (0:00:00.498) 0:05:28.814 ************ 2025-05-19 22:04:46.696396 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:04:46.696403 | orchestrator | 2025-05-19 22:04:46.696410 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-19 22:04:46.696417 | orchestrator | Monday 19 May 2025 21:59:07 +0000 (0:00:00.805) 0:05:29.620 ************ 2025-05-19 22:04:46.696424 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.696430 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.696437 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.696444 | orchestrator | 2025-05-19 22:04:46.696451 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-19 22:04:46.696457 | orchestrator | Monday 19 May 2025 21:59:07 +0000 (0:00:00.693) 0:05:30.314 ************ 2025-05-19 22:04:46.696464 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696471 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.696477 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.696484 | orchestrator | 2025-05-19 22:04:46.696490 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-19 22:04:46.696497 | orchestrator | Monday 19 May 2025 21:59:08 +0000 (0:00:00.310) 0:05:30.624 ************ 2025-05-19 22:04:46.696504 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696511 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.696517 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.696524 | orchestrator | 2025-05-19 22:04:46.696549 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-19 
22:04:46.696557 | orchestrator | Monday 19 May 2025 21:59:08 +0000 (0:00:00.561) 0:05:31.186 ************ 2025-05-19 22:04:46.696563 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696570 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.696577 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.696583 | orchestrator | 2025-05-19 22:04:46.696602 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-19 22:04:46.696610 | orchestrator | Monday 19 May 2025 21:59:08 +0000 (0:00:00.306) 0:05:31.493 ************ 2025-05-19 22:04:46.696617 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.696624 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.696630 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.696637 | orchestrator | 2025-05-19 22:04:46.696644 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-19 22:04:46.696651 | orchestrator | Monday 19 May 2025 21:59:09 +0000 (0:00:00.677) 0:05:32.170 ************ 2025-05-19 22:04:46.696657 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696664 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.696671 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.696678 | orchestrator | 2025-05-19 22:04:46.696684 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-19 22:04:46.696691 | orchestrator | Monday 19 May 2025 21:59:09 +0000 (0:00:00.329) 0:05:32.499 ************ 2025-05-19 22:04:46.696698 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696705 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.696712 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.696718 | orchestrator | 2025-05-19 22:04:46.696725 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-19 22:04:46.696732 | 
orchestrator | Monday 19 May 2025 21:59:10 +0000 (0:00:00.591) 0:05:33.091 ************ 2025-05-19 22:04:46.696739 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.696746 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.696752 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.696759 | orchestrator | 2025-05-19 22:04:46.696766 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-19 22:04:46.696772 | orchestrator | Monday 19 May 2025 21:59:11 +0000 (0:00:00.727) 0:05:33.819 ************ 2025-05-19 22:04:46.696780 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.696793 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.696799 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.696806 | orchestrator | 2025-05-19 22:04:46.696813 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-19 22:04:46.696820 | orchestrator | Monday 19 May 2025 21:59:12 +0000 (0:00:00.866) 0:05:34.686 ************ 2025-05-19 22:04:46.696826 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696833 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.696840 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.696846 | orchestrator | 2025-05-19 22:04:46.696853 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-19 22:04:46.696860 | orchestrator | Monday 19 May 2025 21:59:12 +0000 (0:00:00.306) 0:05:34.992 ************ 2025-05-19 22:04:46.696867 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.696874 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.696880 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.696887 | orchestrator | 2025-05-19 22:04:46.696894 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-19 22:04:46.696901 | orchestrator | Monday 19 May 2025 21:59:12 +0000 
(0:00:00.583) 0:05:35.575 ************ 2025-05-19 22:04:46.696907 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696914 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.696921 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.696927 | orchestrator | 2025-05-19 22:04:46.696934 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-19 22:04:46.696941 | orchestrator | Monday 19 May 2025 21:59:13 +0000 (0:00:00.292) 0:05:35.868 ************ 2025-05-19 22:04:46.696948 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.696955 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.696961 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.696968 | orchestrator | 2025-05-19 22:04:46.696975 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-19 22:04:46.697006 | orchestrator | Monday 19 May 2025 21:59:13 +0000 (0:00:00.301) 0:05:36.170 ************ 2025-05-19 22:04:46.697013 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.697020 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.697031 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.697037 | orchestrator | 2025-05-19 22:04:46.697044 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-19 22:04:46.697051 | orchestrator | Monday 19 May 2025 21:59:13 +0000 (0:00:00.325) 0:05:36.496 ************ 2025-05-19 22:04:46.697058 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.697065 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.697071 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.697078 | orchestrator | 2025-05-19 22:04:46.697085 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-19 22:04:46.697092 | orchestrator | Monday 19 May 2025 21:59:14 +0000 
(0:00:00.617) 0:05:37.113 ************ 2025-05-19 22:04:46.697098 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.697105 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.697112 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.697118 | orchestrator | 2025-05-19 22:04:46.697125 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-19 22:04:46.697132 | orchestrator | Monday 19 May 2025 21:59:14 +0000 (0:00:00.337) 0:05:37.451 ************ 2025-05-19 22:04:46.697138 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.697145 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.697152 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.697159 | orchestrator | 2025-05-19 22:04:46.697165 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-19 22:04:46.697172 | orchestrator | Monday 19 May 2025 21:59:15 +0000 (0:00:00.341) 0:05:37.792 ************ 2025-05-19 22:04:46.697179 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.697186 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.697192 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.697203 | orchestrator | 2025-05-19 22:04:46.697219 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-19 22:04:46.697226 | orchestrator | Monday 19 May 2025 21:59:15 +0000 (0:00:00.331) 0:05:38.123 ************ 2025-05-19 22:04:46.697233 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.697239 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.697246 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.697253 | orchestrator | 2025-05-19 22:04:46.697260 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-05-19 22:04:46.697267 | orchestrator | Monday 19 May 2025 21:59:16 +0000 (0:00:00.826) 0:05:38.950 ************ 2025-05-19 
22:04:46.697273 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 22:04:46.697280 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 22:04:46.697287 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 22:04:46.697293 | orchestrator | 2025-05-19 22:04:46.697300 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-05-19 22:04:46.697307 | orchestrator | Monday 19 May 2025 21:59:16 +0000 (0:00:00.627) 0:05:39.578 ************ 2025-05-19 22:04:46.697314 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:04:46.697320 | orchestrator | 2025-05-19 22:04:46.697327 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-05-19 22:04:46.697334 | orchestrator | Monday 19 May 2025 21:59:17 +0000 (0:00:00.588) 0:05:40.171 ************ 2025-05-19 22:04:46.697341 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:04:46.697347 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:04:46.697354 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:04:46.697361 | orchestrator | 2025-05-19 22:04:46.697367 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-05-19 22:04:46.697374 | orchestrator | Monday 19 May 2025 21:59:18 +0000 (0:00:00.952) 0:05:41.123 ************ 2025-05-19 22:04:46.697381 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.697387 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.697394 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.697401 | orchestrator | 2025-05-19 22:04:46.697408 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-05-19 22:04:46.697414 | orchestrator | Monday 19 May 2025 21:59:18 +0000 
(0:00:00.322) 0:05:41.446 ************ 2025-05-19 22:04:46.697421 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 22:04:46.697428 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 22:04:46.697435 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 22:04:46.697441 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-19 22:04:46.697448 | orchestrator | 2025-05-19 22:04:46.697455 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-05-19 22:04:46.697461 | orchestrator | Monday 19 May 2025 21:59:29 +0000 (0:00:10.487) 0:05:51.933 ************ 2025-05-19 22:04:46.697468 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.697475 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.697481 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.697488 | orchestrator | 2025-05-19 22:04:46.697495 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-05-19 22:04:46.697502 | orchestrator | Monday 19 May 2025 21:59:29 +0000 (0:00:00.473) 0:05:52.407 ************ 2025-05-19 22:04:46.697508 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-19 22:04:46.697515 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 22:04:46.697522 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 22:04:46.697548 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-19 22:04:46.697555 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:04:46.697562 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:04:46.697576 | orchestrator | 2025-05-19 22:04:46.697583 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-05-19 22:04:46.697589 | orchestrator | Monday 19 May 2025 21:59:32 +0000 (0:00:02.859) 
0:05:55.266 ************ 2025-05-19 22:04:46.697619 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-19 22:04:46.697627 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 22:04:46.697633 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 22:04:46.697644 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-19 22:04:46.697651 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-19 22:04:46.697657 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-19 22:04:46.697664 | orchestrator | 2025-05-19 22:04:46.697671 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-05-19 22:04:46.697677 | orchestrator | Monday 19 May 2025 21:59:33 +0000 (0:00:01.300) 0:05:56.566 ************ 2025-05-19 22:04:46.697684 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.697691 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.697697 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.697704 | orchestrator | 2025-05-19 22:04:46.697711 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-05-19 22:04:46.697717 | orchestrator | Monday 19 May 2025 21:59:34 +0000 (0:00:00.720) 0:05:57.287 ************ 2025-05-19 22:04:46.697724 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.697731 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.697737 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.697744 | orchestrator | 2025-05-19 22:04:46.697751 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-05-19 22:04:46.697758 | orchestrator | Monday 19 May 2025 21:59:35 +0000 (0:00:00.329) 0:05:57.616 ************ 2025-05-19 22:04:46.697764 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.697771 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.697777 | orchestrator | 
skipping: [testbed-node-2] 2025-05-19 22:04:46.697784 | orchestrator | 2025-05-19 22:04:46.697791 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-05-19 22:04:46.697797 | orchestrator | Monday 19 May 2025 21:59:35 +0000 (0:00:00.332) 0:05:57.948 ************ 2025-05-19 22:04:46.697804 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:04:46.697811 | orchestrator | 2025-05-19 22:04:46.697817 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-05-19 22:04:46.697824 | orchestrator | Monday 19 May 2025 21:59:36 +0000 (0:00:00.917) 0:05:58.866 ************ 2025-05-19 22:04:46.697830 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.697837 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.697844 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.697850 | orchestrator | 2025-05-19 22:04:46.697857 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-05-19 22:04:46.697864 | orchestrator | Monday 19 May 2025 21:59:36 +0000 (0:00:00.339) 0:05:59.205 ************ 2025-05-19 22:04:46.697870 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.697877 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.697884 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:04:46.697890 | orchestrator | 2025-05-19 22:04:46.697897 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-05-19 22:04:46.697904 | orchestrator | Monday 19 May 2025 21:59:36 +0000 (0:00:00.322) 0:05:59.528 ************ 2025-05-19 22:04:46.697911 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:04:46.697917 | orchestrator | 2025-05-19 22:04:46.697924 | orchestrator | TASK [ceph-mgr : 
Generate systemd unit file] *********************************** 2025-05-19 22:04:46.697931 | orchestrator | Monday 19 May 2025 21:59:37 +0000 (0:00:00.863) 0:06:00.392 ************ 2025-05-19 22:04:46.697938 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:04:46.697949 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:04:46.697956 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:04:46.697963 | orchestrator | 2025-05-19 22:04:46.697969 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-05-19 22:04:46.697976 | orchestrator | Monday 19 May 2025 21:59:39 +0000 (0:00:01.275) 0:06:01.668 ************ 2025-05-19 22:04:46.697983 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:04:46.697989 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:04:46.697996 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:04:46.698002 | orchestrator | 2025-05-19 22:04:46.698009 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-05-19 22:04:46.698042 | orchestrator | Monday 19 May 2025 21:59:40 +0000 (0:00:01.175) 0:06:02.844 ************ 2025-05-19 22:04:46.698051 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:04:46.698057 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:04:46.698064 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:04:46.698071 | orchestrator | 2025-05-19 22:04:46.698077 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-05-19 22:04:46.698084 | orchestrator | Monday 19 May 2025 21:59:42 +0000 (0:00:02.073) 0:06:04.917 ************ 2025-05-19 22:04:46.698090 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:04:46.698097 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:04:46.698103 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:04:46.698110 | orchestrator | 2025-05-19 22:04:46.698117 | orchestrator | TASK [ceph-mgr : Include 
mgr_modules.yml] ************************************** 2025-05-19 22:04:46.698123 | orchestrator | Monday 19 May 2025 21:59:44 +0000 (0:00:01.938) 0:06:06.855 ************ 2025-05-19 22:04:46.698130 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.698137 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:04:46.698143 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-19 22:04:46.698150 | orchestrator | 2025-05-19 22:04:46.698156 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-05-19 22:04:46.698163 | orchestrator | Monday 19 May 2025 21:59:44 +0000 (0:00:00.407) 0:06:07.263 ************ 2025-05-19 22:04:46.698170 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-05-19 22:04:46.698177 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-05-19 22:04:46.698204 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-05-19 22:04:46.698212 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2025-05-19 22:04:46.698223 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-19 22:04:46.698229 | orchestrator | 2025-05-19 22:04:46.698236 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-05-19 22:04:46.698243 | orchestrator | Monday 19 May 2025 22:00:08 +0000 (0:00:24.211) 0:06:31.474 ************ 2025-05-19 22:04:46.698250 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-05-19 22:04:46.698256 | orchestrator | 2025-05-19 22:04:46.698263 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-05-19 22:04:46.698270 | orchestrator | Monday 19 May 2025 22:00:10 +0000 (0:00:01.514) 0:06:32.989 ************ 2025-05-19 22:04:46.698276 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.698283 | orchestrator | 2025-05-19 22:04:46.698290 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-05-19 22:04:46.698297 | orchestrator | Monday 19 May 2025 22:00:11 +0000 (0:00:00.902) 0:06:33.891 ************ 2025-05-19 22:04:46.698303 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.698310 | orchestrator | 2025-05-19 22:04:46.698317 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-05-19 22:04:46.698323 | orchestrator | Monday 19 May 2025 22:00:11 +0000 (0:00:00.132) 0:06:34.023 ************ 2025-05-19 22:04:46.698335 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-05-19 22:04:46.698342 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-05-19 22:04:46.698348 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-05-19 22:04:46.698355 | orchestrator | 2025-05-19 22:04:46.698362 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-05-19 22:04:46.698368 | orchestrator | Monday 19 May 2025 22:00:17 +0000 (0:00:06.347) 0:06:40.371 ************ 2025-05-19 22:04:46.698375 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-05-19 22:04:46.698381 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-05-19 22:04:46.698388 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-05-19 22:04:46.698395 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-05-19 22:04:46.698401 | orchestrator | 2025-05-19 22:04:46.698408 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-19 22:04:46.698414 | orchestrator | Monday 19 May 2025 22:00:22 +0000 (0:00:04.652) 0:06:45.023 ************ 2025-05-19 22:04:46.698421 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:04:46.698428 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:04:46.698434 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:04:46.698441 | orchestrator | 2025-05-19 22:04:46.698447 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-05-19 22:04:46.698454 | orchestrator | Monday 19 May 2025 22:00:23 +0000 (0:00:00.957) 0:06:45.981 ************ 2025-05-19 22:04:46.698461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:04:46.698467 | orchestrator | 2025-05-19 22:04:46.698474 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-05-19 22:04:46.698481 | orchestrator | Monday 19 May 2025 22:00:23 +0000 (0:00:00.537) 0:06:46.519 ************ 2025-05-19 22:04:46.698488 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.698494 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.698501 | orchestrator | ok: 
[testbed-node-2] 2025-05-19 22:04:46.698507 | orchestrator | 2025-05-19 22:04:46.698514 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-05-19 22:04:46.698521 | orchestrator | Monday 19 May 2025 22:00:24 +0000 (0:00:00.345) 0:06:46.865 ************ 2025-05-19 22:04:46.698547 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:04:46.698557 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:04:46.698564 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:04:46.698571 | orchestrator | 2025-05-19 22:04:46.698578 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-05-19 22:04:46.698584 | orchestrator | Monday 19 May 2025 22:00:25 +0000 (0:00:01.496) 0:06:48.362 ************ 2025-05-19 22:04:46.698591 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-19 22:04:46.698597 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-19 22:04:46.698604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-19 22:04:46.698611 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:04:46.698617 | orchestrator | 2025-05-19 22:04:46.698624 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-05-19 22:04:46.698631 | orchestrator | Monday 19 May 2025 22:00:26 +0000 (0:00:00.595) 0:06:48.958 ************ 2025-05-19 22:04:46.698637 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:04:46.698644 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:04:46.698651 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:04:46.698657 | orchestrator | 2025-05-19 22:04:46.698664 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-05-19 22:04:46.698671 | orchestrator | 2025-05-19 22:04:46.698678 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-19 
22:04:46.698690 | orchestrator | Monday 19 May 2025 22:00:26 +0000 (0:00:00.510) 0:06:49.468 ************ 2025-05-19 22:04:46.698697 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.698703 | orchestrator | 2025-05-19 22:04:46.698710 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-19 22:04:46.698717 | orchestrator | Monday 19 May 2025 22:00:27 +0000 (0:00:00.728) 0:06:50.197 ************ 2025-05-19 22:04:46.698746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.698754 | orchestrator | 2025-05-19 22:04:46.698761 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-19 22:04:46.698772 | orchestrator | Monday 19 May 2025 22:00:28 +0000 (0:00:00.552) 0:06:50.750 ************ 2025-05-19 22:04:46.698778 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.698785 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.698792 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.698799 | orchestrator | 2025-05-19 22:04:46.698805 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-19 22:04:46.698812 | orchestrator | Monday 19 May 2025 22:00:28 +0000 (0:00:00.309) 0:06:51.060 ************ 2025-05-19 22:04:46.698819 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.698826 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.698832 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.698839 | orchestrator | 2025-05-19 22:04:46.698846 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-19 22:04:46.698852 | orchestrator | Monday 19 May 2025 22:00:29 +0000 (0:00:00.935) 0:06:51.995 ************ 
2025-05-19 22:04:46.698859 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.698866 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.698872 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.698879 | orchestrator | 2025-05-19 22:04:46.698886 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-19 22:04:46.698893 | orchestrator | Monday 19 May 2025 22:00:30 +0000 (0:00:00.821) 0:06:52.817 ************ 2025-05-19 22:04:46.698899 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.698906 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.698913 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.698919 | orchestrator | 2025-05-19 22:04:46.698926 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-19 22:04:46.698933 | orchestrator | Monday 19 May 2025 22:00:31 +0000 (0:00:00.825) 0:06:53.642 ************ 2025-05-19 22:04:46.698940 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.698946 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.698953 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.698960 | orchestrator | 2025-05-19 22:04:46.698966 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-19 22:04:46.698973 | orchestrator | Monday 19 May 2025 22:00:31 +0000 (0:00:00.315) 0:06:53.958 ************ 2025-05-19 22:04:46.698980 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.698987 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.698993 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.699000 | orchestrator | 2025-05-19 22:04:46.699007 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-19 22:04:46.699013 | orchestrator | Monday 19 May 2025 22:00:32 +0000 (0:00:00.692) 0:06:54.650 ************ 2025-05-19 22:04:46.699020 | 
orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.699027 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.699033 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.699040 | orchestrator | 2025-05-19 22:04:46.699047 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-19 22:04:46.699054 | orchestrator | Monday 19 May 2025 22:00:32 +0000 (0:00:00.302) 0:06:54.953 ************ 2025-05-19 22:04:46.699060 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.699074 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.699081 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.699088 | orchestrator | 2025-05-19 22:04:46.699095 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-19 22:04:46.699101 | orchestrator | Monday 19 May 2025 22:00:32 +0000 (0:00:00.643) 0:06:55.597 ************ 2025-05-19 22:04:46.699108 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.699115 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.699121 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.699128 | orchestrator | 2025-05-19 22:04:46.699134 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-19 22:04:46.699141 | orchestrator | Monday 19 May 2025 22:00:33 +0000 (0:00:00.649) 0:06:56.246 ************ 2025-05-19 22:04:46.699148 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.699155 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.699161 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.699168 | orchestrator | 2025-05-19 22:04:46.699174 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-19 22:04:46.699181 | orchestrator | Monday 19 May 2025 22:00:34 +0000 (0:00:00.561) 0:06:56.807 ************ 2025-05-19 22:04:46.699188 | orchestrator | skipping: 
[testbed-node-3] 2025-05-19 22:04:46.699195 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.699201 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.699208 | orchestrator | 2025-05-19 22:04:46.699214 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-19 22:04:46.699221 | orchestrator | Monday 19 May 2025 22:00:34 +0000 (0:00:00.307) 0:06:57.115 ************ 2025-05-19 22:04:46.699228 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.699234 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.699241 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.699248 | orchestrator | 2025-05-19 22:04:46.699254 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-19 22:04:46.699261 | orchestrator | Monday 19 May 2025 22:00:34 +0000 (0:00:00.333) 0:06:57.448 ************ 2025-05-19 22:04:46.699268 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.699275 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.699281 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.699288 | orchestrator | 2025-05-19 22:04:46.699294 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-19 22:04:46.699301 | orchestrator | Monday 19 May 2025 22:00:35 +0000 (0:00:00.335) 0:06:57.784 ************ 2025-05-19 22:04:46.699308 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.699314 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.699321 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.699328 | orchestrator | 2025-05-19 22:04:46.699334 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-19 22:04:46.699341 | orchestrator | Monday 19 May 2025 22:00:35 +0000 (0:00:00.571) 0:06:58.356 ************ 2025-05-19 22:04:46.699348 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.699354 | 
orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.699361 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.699368 | orchestrator | 2025-05-19 22:04:46.699378 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-19 22:04:46.699385 | orchestrator | Monday 19 May 2025 22:00:36 +0000 (0:00:00.376) 0:06:58.732 ************ 2025-05-19 22:04:46.699395 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.699402 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.699408 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.699415 | orchestrator | 2025-05-19 22:04:46.699421 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-19 22:04:46.699428 | orchestrator | Monday 19 May 2025 22:00:36 +0000 (0:00:00.319) 0:06:59.052 ************ 2025-05-19 22:04:46.699435 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.699442 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.699448 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.699459 | orchestrator | 2025-05-19 22:04:46.699466 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-19 22:04:46.699472 | orchestrator | Monday 19 May 2025 22:00:36 +0000 (0:00:00.316) 0:06:59.369 ************ 2025-05-19 22:04:46.699479 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.699486 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.699493 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.699499 | orchestrator | 2025-05-19 22:04:46.699506 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-19 22:04:46.699513 | orchestrator | Monday 19 May 2025 22:00:37 +0000 (0:00:00.604) 0:06:59.973 ************ 2025-05-19 22:04:46.699520 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.699538 | orchestrator | ok: 
[testbed-node-4] 2025-05-19 22:04:46.699546 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.699552 | orchestrator | 2025-05-19 22:04:46.699559 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-05-19 22:04:46.699566 | orchestrator | Monday 19 May 2025 22:00:37 +0000 (0:00:00.518) 0:07:00.491 ************ 2025-05-19 22:04:46.699572 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.699579 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.699586 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.699592 | orchestrator | 2025-05-19 22:04:46.699599 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-05-19 22:04:46.699606 | orchestrator | Monday 19 May 2025 22:00:38 +0000 (0:00:00.310) 0:07:00.802 ************ 2025-05-19 22:04:46.699612 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 22:04:46.699619 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 22:04:46.699626 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 22:04:46.699632 | orchestrator | 2025-05-19 22:04:46.699639 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-05-19 22:04:46.699646 | orchestrator | Monday 19 May 2025 22:00:39 +0000 (0:00:00.940) 0:07:01.742 ************ 2025-05-19 22:04:46.699652 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.699659 | orchestrator | 2025-05-19 22:04:46.699666 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-05-19 22:04:46.699672 | orchestrator | Monday 19 May 2025 22:00:39 +0000 (0:00:00.786) 0:07:02.529 ************ 2025-05-19 22:04:46.699679 | orchestrator | skipping: 
[testbed-node-3] 2025-05-19 22:04:46.699686 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.699692 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.699699 | orchestrator | 2025-05-19 22:04:46.699706 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-05-19 22:04:46.699712 | orchestrator | Monday 19 May 2025 22:00:40 +0000 (0:00:00.310) 0:07:02.840 ************ 2025-05-19 22:04:46.699719 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.699726 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.699732 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.699739 | orchestrator | 2025-05-19 22:04:46.699746 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-05-19 22:04:46.699752 | orchestrator | Monday 19 May 2025 22:00:40 +0000 (0:00:00.296) 0:07:03.136 ************ 2025-05-19 22:04:46.699759 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.699766 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.699772 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.699779 | orchestrator | 2025-05-19 22:04:46.699785 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-05-19 22:04:46.699792 | orchestrator | Monday 19 May 2025 22:00:41 +0000 (0:00:00.924) 0:07:04.060 ************ 2025-05-19 22:04:46.699799 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.699805 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.699812 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.699818 | orchestrator | 2025-05-19 22:04:46.699830 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-05-19 22:04:46.699836 | orchestrator | Monday 19 May 2025 22:00:41 +0000 (0:00:00.344) 0:07:04.404 ************ 2025-05-19 22:04:46.699843 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-19 22:04:46.699850 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-19 22:04:46.699857 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-05-19 22:04:46.699863 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-19 22:04:46.699870 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-19 22:04:46.699877 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-05-19 22:04:46.699883 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-19 22:04:46.699890 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-19 22:04:46.699901 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-19 22:04:46.699908 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-19 22:04:46.699919 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-19 22:04:46.699926 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-19 22:04:46.699932 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-05-19 22:04:46.699939 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-05-19 22:04:46.699946 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-05-19 22:04:46.699952 | orchestrator | 2025-05-19 22:04:46.699959 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-05-19 22:04:46.699965 | orchestrator | Monday 19 May 2025 22:00:44 +0000 (0:00:02.941) 0:07:07.346 ************ 2025-05-19 22:04:46.699972 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.699979 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.699985 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.699992 | orchestrator | 2025-05-19 22:04:46.699999 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-05-19 22:04:46.700005 | orchestrator | Monday 19 May 2025 22:00:45 +0000 (0:00:00.290) 0:07:07.637 ************ 2025-05-19 22:04:46.700012 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.700018 | orchestrator | 2025-05-19 22:04:46.700025 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-05-19 22:04:46.700032 | orchestrator | Monday 19 May 2025 22:00:45 +0000 (0:00:00.786) 0:07:08.424 ************ 2025-05-19 22:04:46.700038 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-19 22:04:46.700045 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-19 22:04:46.700051 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-05-19 22:04:46.700058 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-05-19 22:04:46.700065 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-05-19 22:04:46.700071 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-05-19 22:04:46.700078 | orchestrator | 2025-05-19 22:04:46.700085 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-05-19 22:04:46.700092 | orchestrator | Monday 19 May 2025 22:00:46 +0000 (0:00:00.892) 0:07:09.316 ************ 2025-05-19 22:04:46.700098 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:04:46.700111 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 22:04:46.700118 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 22:04:46.700124 | orchestrator | 2025-05-19 22:04:46.700131 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-05-19 22:04:46.700138 | orchestrator | Monday 19 May 2025 22:00:48 +0000 (0:00:01.934) 0:07:11.250 ************ 2025-05-19 22:04:46.700144 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 22:04:46.700151 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 22:04:46.700158 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.700164 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 22:04:46.700171 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 22:04:46.700177 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.700184 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 22:04:46.700190 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 22:04:46.700197 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.700204 | orchestrator | 2025-05-19 22:04:46.700210 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-05-19 22:04:46.700217 | orchestrator | Monday 19 May 2025 22:00:50 +0000 (0:00:01.395) 0:07:12.646 ************ 2025-05-19 22:04:46.700224 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 22:04:46.700230 | orchestrator | 2025-05-19 22:04:46.700237 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-05-19 22:04:46.700244 | orchestrator | Monday 19 May 2025 22:00:52 +0000 (0:00:02.146) 0:07:14.792 ************ 2025-05-19 22:04:46.700250 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.700257 | orchestrator | 2025-05-19 22:04:46.700263 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-05-19 22:04:46.700270 | orchestrator | Monday 19 May 2025 22:00:52 +0000 (0:00:00.514) 0:07:15.306 ************ 2025-05-19 22:04:46.700277 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-52cfe21f-2cf0-5660-8f5b-0412bede7d5f', 'data_vg': 'ceph-52cfe21f-2cf0-5660-8f5b-0412bede7d5f'}) 2025-05-19 22:04:46.700284 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-d2161015-9b2d-55ef-85cd-b20f941db83a', 'data_vg': 'ceph-d2161015-9b2d-55ef-85cd-b20f941db83a'}) 2025-05-19 22:04:46.700291 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-d6c00661-cf2a-5067-a507-d2ca4df6447b', 'data_vg': 'ceph-d6c00661-cf2a-5067-a507-d2ca4df6447b'}) 2025-05-19 22:04:46.700298 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9', 'data_vg': 'ceph-8ad6e576-16ee-5df9-adc2-5fd1c09e2bb9'}) 2025-05-19 22:04:46.700308 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8', 'data_vg': 'ceph-cfdd3ed5-b98d-51b3-b2a5-29887bcc1fa8'}) 2025-05-19 22:04:46.700318 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-73ec3cc1-218e-51bb-a362-2e871742ea52', 'data_vg': 'ceph-73ec3cc1-218e-51bb-a362-2e871742ea52'}) 2025-05-19 22:04:46.700325 | orchestrator | 2025-05-19 22:04:46.700332 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-05-19 22:04:46.700338 | orchestrator | Monday 19 May 2025 22:01:32 +0000 (0:00:39.502) 0:07:54.809 ************ 2025-05-19 22:04:46.700345 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.700352 | orchestrator | skipping: [testbed-node-4] 2025-05-19 
22:04:46.700358 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.700365 | orchestrator | 2025-05-19 22:04:46.700372 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-05-19 22:04:46.700378 | orchestrator | Monday 19 May 2025 22:01:32 +0000 (0:00:00.566) 0:07:55.375 ************ 2025-05-19 22:04:46.700385 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.700397 | orchestrator | 2025-05-19 22:04:46.700403 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-05-19 22:04:46.700410 | orchestrator | Monday 19 May 2025 22:01:33 +0000 (0:00:00.523) 0:07:55.898 ************ 2025-05-19 22:04:46.700417 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.700423 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.700430 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.700437 | orchestrator | 2025-05-19 22:04:46.700443 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-05-19 22:04:46.700450 | orchestrator | Monday 19 May 2025 22:01:33 +0000 (0:00:00.650) 0:07:56.548 ************ 2025-05-19 22:04:46.700457 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.700463 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.700470 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.700477 | orchestrator | 2025-05-19 22:04:46.700483 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-05-19 22:04:46.700490 | orchestrator | Monday 19 May 2025 22:01:36 +0000 (0:00:02.714) 0:07:59.263 ************ 2025-05-19 22:04:46.700497 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.700503 | orchestrator | 2025-05-19 22:04:46.700510 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] ***********************************
2025-05-19 22:04:46.700516 | orchestrator | Monday 19 May 2025 22:01:37 +0000 (0:00:00.552) 0:07:59.815 ************
2025-05-19 22:04:46.700523 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.700565 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.700572 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.700579 | orchestrator |
2025-05-19 22:04:46.700586 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-05-19 22:04:46.700592 | orchestrator | Monday 19 May 2025 22:01:38 +0000 (0:00:01.232) 0:08:01.047 ************
2025-05-19 22:04:46.700599 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.700606 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.700612 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.700619 | orchestrator |
2025-05-19 22:04:46.700626 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-05-19 22:04:46.700632 | orchestrator | Monday 19 May 2025 22:01:39 +0000 (0:00:01.422) 0:08:02.470 ************
2025-05-19 22:04:46.700639 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.700646 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.700653 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.700659 | orchestrator |
2025-05-19 22:04:46.700666 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-05-19 22:04:46.700673 | orchestrator | Monday 19 May 2025 22:01:41 +0000 (0:00:01.663) 0:08:04.134 ************
2025-05-19 22:04:46.700679 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.700686 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.700693 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.700699 | orchestrator |
2025-05-19 22:04:46.700706 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-05-19 22:04:46.700713 | orchestrator | Monday 19 May 2025 22:01:41 +0000 (0:00:00.319) 0:08:04.453 ************
2025-05-19 22:04:46.700719 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.700726 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.700733 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.700739 | orchestrator |
2025-05-19 22:04:46.700746 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-05-19 22:04:46.700753 | orchestrator | Monday 19 May 2025 22:01:42 +0000 (0:00:00.285) 0:08:04.739 ************
2025-05-19 22:04:46.700759 | orchestrator | ok: [testbed-node-3] => (item=4)
2025-05-19 22:04:46.700766 | orchestrator | ok: [testbed-node-4] => (item=3)
2025-05-19 22:04:46.700773 | orchestrator | ok: [testbed-node-5] => (item=1)
2025-05-19 22:04:46.700779 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-19 22:04:46.700786 | orchestrator | ok: [testbed-node-4] => (item=2)
2025-05-19 22:04:46.700799 | orchestrator | ok: [testbed-node-5] => (item=5)
2025-05-19 22:04:46.700805 | orchestrator |
2025-05-19 22:04:46.700811 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-05-19 22:04:46.700817 | orchestrator | Monday 19 May 2025 22:01:43 +0000 (0:00:01.173) 0:08:05.912 ************
2025-05-19 22:04:46.700824 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-05-19 22:04:46.700830 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-05-19 22:04:46.700836 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-05-19 22:04:46.700842 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-05-19 22:04:46.700848 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-05-19 22:04:46.700854 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-05-19 22:04:46.700860 | orchestrator |
2025-05-19 22:04:46.700867 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-05-19 22:04:46.700873 | orchestrator | Monday 19 May 2025 22:01:45 +0000 (0:00:02.114) 0:08:08.026 ************
2025-05-19 22:04:46.700879 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-05-19 22:04:46.700885 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-05-19 22:04:46.700895 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-05-19 22:04:46.700901 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-05-19 22:04:46.700907 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-05-19 22:04:46.700917 | orchestrator | changed: [testbed-node-4] => (item=2)
2025-05-19 22:04:46.700923 | orchestrator |
2025-05-19 22:04:46.700929 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-05-19 22:04:46.700936 | orchestrator | Monday 19 May 2025 22:01:48 +0000 (0:00:03.500) 0:08:11.527 ************
2025-05-19 22:04:46.700942 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.700948 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.700954 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-19 22:04:46.700960 | orchestrator |
2025-05-19 22:04:46.700966 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-05-19 22:04:46.700973 | orchestrator | Monday 19 May 2025 22:01:51 +0000 (0:00:02.766) 0:08:14.294 ************
2025-05-19 22:04:46.700979 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.700985 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.700991 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-05-19 22:04:46.700997 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-19 22:04:46.701004 | orchestrator |
2025-05-19 22:04:46.701010 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-05-19 22:04:46.701016 | orchestrator | Monday 19 May 2025 22:02:04 +0000 (0:00:12.790) 0:08:27.084 ************
2025-05-19 22:04:46.701022 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701029 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.701035 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.701041 | orchestrator |
2025-05-19 22:04:46.701047 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-05-19 22:04:46.701053 | orchestrator | Monday 19 May 2025 22:02:05 +0000 (0:00:00.641) 0:08:27.726 ************
2025-05-19 22:04:46.701059 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701065 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.701074 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.701084 | orchestrator |
2025-05-19 22:04:46.701095 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-05-19 22:04:46.701105 | orchestrator | Monday 19 May 2025 22:02:05 +0000 (0:00:00.370) 0:08:28.097 ************
2025-05-19 22:04:46.701116 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.701126 | orchestrator |
2025-05-19 22:04:46.701137 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-05-19 22:04:46.701154 | orchestrator | Monday 19 May 2025 22:02:05 +0000 (0:00:00.442) 0:08:28.539 ************
2025-05-19 22:04:46.701162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 22:04:46.701168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 22:04:46.701174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 22:04:46.701180 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701186 | orchestrator |
2025-05-19 22:04:46.701192 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-05-19 22:04:46.701199 | orchestrator | Monday 19 May 2025 22:02:06 +0000 (0:00:00.320) 0:08:28.860 ************
2025-05-19 22:04:46.701205 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701211 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.701217 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.701223 | orchestrator |
2025-05-19 22:04:46.701229 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-05-19 22:04:46.701235 | orchestrator | Monday 19 May 2025 22:02:06 +0000 (0:00:00.231) 0:08:29.091 ************
2025-05-19 22:04:46.701241 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701248 | orchestrator |
2025-05-19 22:04:46.701254 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-05-19 22:04:46.701260 | orchestrator | Monday 19 May 2025 22:02:06 +0000 (0:00:00.447) 0:08:29.283 ************
2025-05-19 22:04:46.701266 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701272 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.701278 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.701284 | orchestrator |
2025-05-19 22:04:46.701291 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-05-19 22:04:46.701297 | orchestrator | Monday 19 May 2025 22:02:07 +0000 (0:00:00.447) 0:08:29.731 ************
2025-05-19 22:04:46.701303 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701309 | orchestrator |
2025-05-19 22:04:46.701315 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-05-19 22:04:46.701321 | orchestrator | Monday 19 May 2025 22:02:07 +0000 (0:00:00.195) 0:08:29.927 ************
2025-05-19 22:04:46.701327 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701333 | orchestrator |
2025-05-19 22:04:46.701340 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-05-19 22:04:46.701346 | orchestrator | Monday 19 May 2025 22:02:07 +0000 (0:00:00.198) 0:08:30.126 ************
2025-05-19 22:04:46.701352 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701358 | orchestrator |
2025-05-19 22:04:46.701364 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-05-19 22:04:46.701370 | orchestrator | Monday 19 May 2025 22:02:07 +0000 (0:00:00.111) 0:08:30.237 ************
2025-05-19 22:04:46.701376 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701382 | orchestrator |
2025-05-19 22:04:46.701389 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-05-19 22:04:46.701395 | orchestrator | Monday 19 May 2025 22:02:07 +0000 (0:00:00.186) 0:08:30.423 ************
2025-05-19 22:04:46.701401 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701407 | orchestrator |
2025-05-19 22:04:46.701413 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-05-19 22:04:46.701419 | orchestrator | Monday 19 May 2025 22:02:07 +0000 (0:00:00.176) 0:08:30.599 ************
2025-05-19 22:04:46.701429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-19 22:04:46.701436 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-19 22:04:46.701442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-19 22:04:46.701452 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701458 | orchestrator |
2025-05-19 22:04:46.701464 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-05-19 22:04:46.701471 | orchestrator | Monday 19 May 2025 22:02:08 +0000 (0:00:00.392) 0:08:30.992 ************
2025-05-19 22:04:46.701481 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701487 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.701493 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.701500 | orchestrator |
2025-05-19 22:04:46.701506 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-05-19 22:04:46.701512 | orchestrator | Monday 19 May 2025 22:02:08 +0000 (0:00:00.280) 0:08:31.272 ************
2025-05-19 22:04:46.701518 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701524 | orchestrator |
2025-05-19 22:04:46.701544 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-05-19 22:04:46.701550 | orchestrator | Monday 19 May 2025 22:02:09 +0000 (0:00:00.738) 0:08:32.011 ************
2025-05-19 22:04:46.701556 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701562 | orchestrator |
2025-05-19 22:04:46.701569 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-05-19 22:04:46.701575 | orchestrator |
2025-05-19 22:04:46.701581 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-19 22:04:46.701587 | orchestrator | Monday 19 May 2025 22:02:10 +0000 (0:00:00.656) 0:08:32.668 ************
2025-05-19 22:04:46.701593 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.701600 | orchestrator |
2025-05-19 22:04:46.701606 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-19 22:04:46.701612 | orchestrator | Monday 19 May 2025 22:02:11 +0000 (0:00:01.170) 0:08:33.839 ************
2025-05-19 22:04:46.701619 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.701625 | orchestrator |
2025-05-19 22:04:46.701631 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-19 22:04:46.701637 | orchestrator | Monday 19 May 2025 22:02:12 +0000 (0:00:01.234) 0:08:35.074 ************
2025-05-19 22:04:46.701644 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701650 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.701656 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.701662 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.701668 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.701674 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.701680 | orchestrator |
2025-05-19 22:04:46.701687 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-19 22:04:46.701693 | orchestrator | Monday 19 May 2025 22:02:13 +0000 (0:00:00.828) 0:08:35.903 ************
2025-05-19 22:04:46.701699 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.701705 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.701711 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.701717 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.701723 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.701730 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.701736 | orchestrator |
2025-05-19 22:04:46.701742 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-19 22:04:46.701748 | orchestrator | Monday 19 May 2025 22:02:14 +0000 (0:00:00.967) 0:08:36.870 ************
2025-05-19 22:04:46.701754 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.701760 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.701767 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.701773 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.701779 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.701785 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.701791 | orchestrator |
2025-05-19 22:04:46.701797 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-19 22:04:46.701803 | orchestrator | Monday 19 May 2025 22:02:15 +0000 (0:00:01.245) 0:08:38.116 ************
2025-05-19 22:04:46.701810 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.701820 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.701826 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.701832 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.701839 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.701845 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.701851 | orchestrator |
2025-05-19 22:04:46.701857 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-19 22:04:46.701863 | orchestrator | Monday 19 May 2025 22:02:16 +0000 (0:00:00.977) 0:08:39.093 ************
2025-05-19 22:04:46.701870 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701876 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.701882 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.701888 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.701894 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.701900 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.701906 | orchestrator |
2025-05-19 22:04:46.701912 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-19 22:04:46.701919 | orchestrator | Monday 19 May 2025 22:02:17 +0000 (0:00:00.834) 0:08:39.928 ************
2025-05-19 22:04:46.701925 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.701931 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.701937 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.701943 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.701949 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.701956 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.701962 | orchestrator |
2025-05-19 22:04:46.701968 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-19 22:04:46.701974 | orchestrator | Monday 19 May 2025 22:02:17 +0000 (0:00:00.561) 0:08:40.489 ************
2025-05-19 22:04:46.701984 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.701990 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.701996 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.702002 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.702008 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.702037 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.702045 | orchestrator |
2025-05-19 22:04:46.702051 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-19 22:04:46.702058 | orchestrator | Monday 19 May 2025 22:02:18 +0000 (0:00:00.737) 0:08:41.227 ************
2025-05-19 22:04:46.702064 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.702070 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.702076 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.702083 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.702089 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.702095 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.702101 | orchestrator |
2025-05-19 22:04:46.702107 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-19 22:04:46.702114 | orchestrator | Monday 19 May 2025 22:02:19 +0000 (0:00:01.051) 0:08:42.278 ************
2025-05-19 22:04:46.702120 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.702126 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.702132 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.702138 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.702144 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.702150 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.702157 | orchestrator |
2025-05-19 22:04:46.702163 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-19 22:04:46.702169 | orchestrator | Monday 19 May 2025 22:02:21 +0000 (0:00:01.476) 0:08:43.754 ************
2025-05-19 22:04:46.702175 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.702182 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.702188 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.702194 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.702200 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.702216 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.702222 | orchestrator |
2025-05-19 22:04:46.702228 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-05-19 22:04:46.702234 | orchestrator | Monday 19 May 2025 22:02:21 +0000 (0:00:00.599) 0:08:44.354 ************
2025-05-19 22:04:46.702240 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.702247 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.702253 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.702259 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.702265 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.702271 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.702277 | orchestrator |
2025-05-19 22:04:46.702283 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-05-19 22:04:46.702290 | orchestrator | Monday 19 May 2025 22:02:22 +0000 (0:00:00.794) 0:08:45.149 ************
2025-05-19 22:04:46.702296 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.702302 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.702308 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.702314 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.702320 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.702326 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.702333 | orchestrator |
2025-05-19 22:04:46.702339 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-05-19 22:04:46.702345 | orchestrator | Monday 19 May 2025 22:02:23 +0000 (0:00:00.604) 0:08:45.753 ************
2025-05-19 22:04:46.702351 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.702357 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.702364 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.702370 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.702376 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.702382 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.702388 | orchestrator |
2025-05-19 22:04:46.702395 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-05-19 22:04:46.702401 | orchestrator | Monday 19 May 2025 22:02:23 +0000 (0:00:00.796) 0:08:46.550 ************
2025-05-19 22:04:46.702407 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.702413 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.702419 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.702426 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.702432 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.702438 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.702444 | orchestrator |
2025-05-19 22:04:46.702450 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-05-19 22:04:46.702457 | orchestrator | Monday 19 May 2025 22:02:24 +0000 (0:00:00.601) 0:08:47.152 ************
2025-05-19 22:04:46.702463 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.702469 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.702475 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.702481 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.702487 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.702493 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.702499 | orchestrator |
2025-05-19 22:04:46.702506 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-05-19 22:04:46.702512 | orchestrator | Monday 19 May 2025 22:02:25 +0000 (0:00:00.816) 0:08:47.968 ************
2025-05-19 22:04:46.702518 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:04:46.702524 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:04:46.702544 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:04:46.702550 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.702556 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.702562 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.702568 | orchestrator |
2025-05-19 22:04:46.702575 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-05-19 22:04:46.702581 | orchestrator | Monday 19 May 2025 22:02:25 +0000 (0:00:00.561) 0:08:48.529 ************
2025-05-19 22:04:46.702592 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.702598 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.702605 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.702611 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.702617 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.702623 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.702629 | orchestrator |
2025-05-19 22:04:46.702635 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-05-19 22:04:46.702645 | orchestrator | Monday 19 May 2025 22:02:26 +0000 (0:00:00.799) 0:08:49.329 ************
2025-05-19 22:04:46.702652 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.702658 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.702664 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.702670 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.702676 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.702697 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.702703 | orchestrator |
2025-05-19 22:04:46.702710 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-19 22:04:46.702716 | orchestrator | Monday 19 May 2025 22:02:27 +0000 (0:00:00.601) 0:08:49.930 ************
2025-05-19 22:04:46.702722 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.702728 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.702734 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.702740 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.702747 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.702753 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.702759 | orchestrator |
2025-05-19 22:04:46.702765 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-05-19 22:04:46.702771 | orchestrator | Monday 19 May 2025 22:02:28 +0000 (0:00:01.166) 0:08:51.097 ************
2025-05-19 22:04:46.702777 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.702784 | orchestrator |
2025-05-19 22:04:46.702790 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-05-19 22:04:46.702796 | orchestrator | Monday 19 May 2025 22:02:32 +0000 (0:00:03.834) 0:08:54.932 ************
2025-05-19 22:04:46.702802 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.702808 | orchestrator |
2025-05-19 22:04:46.702815 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-05-19 22:04:46.702821 | orchestrator | Monday 19 May 2025 22:02:34 +0000 (0:00:01.966) 0:08:56.899 ************
2025-05-19 22:04:46.702827 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.702833 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.702840 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.702846 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.702852 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.702858 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.702864 | orchestrator |
2025-05-19 22:04:46.702871 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-05-19 22:04:46.702877 | orchestrator | Monday 19 May 2025 22:02:35 +0000 (0:00:01.696) 0:08:58.596 ************
2025-05-19 22:04:46.702883 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.702889 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.702895 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.702901 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.702908 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.702914 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.702920 | orchestrator |
2025-05-19 22:04:46.702926 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-05-19 22:04:46.702932 | orchestrator | Monday 19 May 2025 22:02:36 +0000 (0:00:00.955) 0:08:59.551 ************
2025-05-19 22:04:46.702939 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.702946 | orchestrator |
2025-05-19 22:04:46.702957 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-05-19 22:04:46.702963 | orchestrator | Monday 19 May 2025 22:02:38 +0000 (0:00:01.363) 0:09:00.915 ************
2025-05-19 22:04:46.702969 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.702975 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.702981 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.702988 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.702994 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.703000 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.703006 | orchestrator |
2025-05-19 22:04:46.703012 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-05-19 22:04:46.703019 | orchestrator | Monday 19 May 2025 22:02:39 +0000 (0:00:01.669) 0:09:02.585 ************
2025-05-19 22:04:46.703025 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.703031 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.703037 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.703043 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.703049 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.703056 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.703062 | orchestrator |
2025-05-19 22:04:46.703068 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-05-19 22:04:46.703074 | orchestrator | Monday 19 May 2025 22:02:42 +0000 (0:00:02.986) 0:09:05.571 ************
2025-05-19 22:04:46.703081 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.703087 | orchestrator |
2025-05-19 22:04:46.703093 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-05-19 22:04:46.703099 | orchestrator | Monday 19 May 2025 22:02:44 +0000 (0:00:01.231) 0:09:06.803 ************
2025-05-19 22:04:46.703106 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.703112 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.703118 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.703124 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.703130 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.703137 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.703143 | orchestrator |
2025-05-19 22:04:46.703149 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-05-19 22:04:46.703155 | orchestrator | Monday 19 May 2025 22:02:45 +0000 (0:00:00.853) 0:09:07.656 ************
2025-05-19 22:04:46.703162 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:04:46.703168 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:04:46.703174 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:04:46.703180 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:04:46.703186 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:04:46.703193 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:04:46.703199 | orchestrator |
2025-05-19 22:04:46.703205 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-05-19 22:04:46.703211 | orchestrator | Monday 19 May 2025 22:02:47 +0000 (0:00:02.049) 0:09:09.706 ************
2025-05-19 22:04:46.703221 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:04:46.703227 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:04:46.703233 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:04:46.703239 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.703246 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.703252 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.703258 | orchestrator |
2025-05-19 22:04:46.703268 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-05-19 22:04:46.703274 | orchestrator |
2025-05-19 22:04:46.703281 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-19 22:04:46.703287 | orchestrator | Monday 19 May 2025 22:02:48 +0000 (0:00:01.058) 0:09:10.764 ************
2025-05-19 22:04:46.703293 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.703303 | orchestrator |
2025-05-19 22:04:46.703310 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-19 22:04:46.703316 | orchestrator | Monday 19 May 2025 22:02:48 +0000 (0:00:00.486) 0:09:11.250 ************
2025-05-19 22:04:46.703322 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:04:46.703328 | orchestrator |
2025-05-19 22:04:46.703334 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-19 22:04:46.703341 | orchestrator | Monday 19 May 2025 22:02:49 +0000 (0:00:00.723) 0:09:11.974 ************
2025-05-19 22:04:46.703347 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.703353 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.703359 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.703365 | orchestrator |
2025-05-19 22:04:46.703372 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-19 22:04:46.703378 | orchestrator | Monday 19 May 2025 22:02:49 +0000 (0:00:00.341) 0:09:12.315 ************
2025-05-19 22:04:46.703384 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.703390 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.703397 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.703403 | orchestrator |
2025-05-19 22:04:46.703409 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-19 22:04:46.703415 | orchestrator | Monday 19 May 2025 22:02:50 +0000 (0:00:00.695) 0:09:13.010 ************
2025-05-19 22:04:46.703421 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.703428 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.703434 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.703440 | orchestrator |
2025-05-19 22:04:46.703446 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-19 22:04:46.703452 | orchestrator | Monday 19 May 2025 22:02:51 +0000 (0:00:00.959) 0:09:13.970 ************
2025-05-19 22:04:46.703459 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.703465 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.703471 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.703477 | orchestrator |
2025-05-19 22:04:46.703483 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-19 22:04:46.703490 | orchestrator | Monday 19 May 2025 22:02:52 +0000 (0:00:00.676) 0:09:14.646 ************
2025-05-19 22:04:46.703496 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.703502 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.703508 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.703515 | orchestrator |
2025-05-19 22:04:46.703521 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-19 22:04:46.703537 | orchestrator | Monday 19 May 2025 22:02:52 +0000 (0:00:00.305) 0:09:14.952 ************
2025-05-19 22:04:46.703544 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.703550 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.703556 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.703562 | orchestrator |
2025-05-19 22:04:46.703568 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-19 22:04:46.703574 | orchestrator | Monday 19 May 2025 22:02:52 +0000 (0:00:00.289) 0:09:15.241 ************
2025-05-19 22:04:46.703581 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:04:46.703587 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:04:46.703593 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:04:46.703599 | orchestrator |
2025-05-19 22:04:46.703605 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-19 22:04:46.703611 | orchestrator | Monday 19 May 2025 22:02:53 +0000 (0:00:00.523) 0:09:15.765 ************
2025-05-19 22:04:46.703617 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.703624 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.703630 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.703636 | orchestrator |
2025-05-19 22:04:46.703642 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-19 22:04:46.703654 | orchestrator | Monday 19 May 2025 22:02:53 +0000 (0:00:00.676) 0:09:16.442 ************
2025-05-19 22:04:46.703660 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:04:46.703666 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:04:46.703672 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:04:46.703679 | orchestrator |
2025-05-19 22:04:46.703685 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-19 22:04:46.703691 | orchestrator | Monday 19 May 2025 22:02:54 +0000 (0:00:00.692) 0:09:17.134 ************ 2025-05-19
22:04:46.703697 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.703703 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.703710 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.703716 | orchestrator | 2025-05-19 22:04:46.703722 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-19 22:04:46.703728 | orchestrator | Monday 19 May 2025 22:02:54 +0000 (0:00:00.290) 0:09:17.425 ************ 2025-05-19 22:04:46.703735 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.703741 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.703747 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.703753 | orchestrator | 2025-05-19 22:04:46.703759 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-19 22:04:46.703765 | orchestrator | Monday 19 May 2025 22:02:55 +0000 (0:00:00.519) 0:09:17.945 ************ 2025-05-19 22:04:46.703771 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.703778 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.703784 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.703790 | orchestrator | 2025-05-19 22:04:46.703799 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-19 22:04:46.703806 | orchestrator | Monday 19 May 2025 22:02:55 +0000 (0:00:00.311) 0:09:18.256 ************ 2025-05-19 22:04:46.703812 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.703818 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.703828 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.703834 | orchestrator | 2025-05-19 22:04:46.703841 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-19 22:04:46.703847 | orchestrator | Monday 19 May 2025 22:02:55 +0000 (0:00:00.317) 0:09:18.574 ************ 2025-05-19 22:04:46.703853 | orchestrator | ok: 
[testbed-node-3] 2025-05-19 22:04:46.703859 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.703865 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.703871 | orchestrator | 2025-05-19 22:04:46.703877 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-19 22:04:46.703883 | orchestrator | Monday 19 May 2025 22:02:56 +0000 (0:00:00.325) 0:09:18.900 ************ 2025-05-19 22:04:46.703890 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.703896 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.703902 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.703908 | orchestrator | 2025-05-19 22:04:46.703914 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-19 22:04:46.703921 | orchestrator | Monday 19 May 2025 22:02:56 +0000 (0:00:00.537) 0:09:19.437 ************ 2025-05-19 22:04:46.703927 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.703933 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.703939 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.703945 | orchestrator | 2025-05-19 22:04:46.703951 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-19 22:04:46.703957 | orchestrator | Monday 19 May 2025 22:02:57 +0000 (0:00:00.290) 0:09:19.727 ************ 2025-05-19 22:04:46.703964 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.703970 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.703976 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.703982 | orchestrator | 2025-05-19 22:04:46.703988 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-19 22:04:46.703994 | orchestrator | Monday 19 May 2025 22:02:57 +0000 (0:00:00.299) 0:09:20.027 ************ 2025-05-19 22:04:46.704005 | orchestrator | ok: [testbed-node-3] 
2025-05-19 22:04:46.704011 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.704018 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.704024 | orchestrator | 2025-05-19 22:04:46.704030 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-19 22:04:46.704036 | orchestrator | Monday 19 May 2025 22:02:57 +0000 (0:00:00.315) 0:09:20.342 ************ 2025-05-19 22:04:46.704042 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.704049 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.704055 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.704061 | orchestrator | 2025-05-19 22:04:46.704067 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-05-19 22:04:46.704073 | orchestrator | Monday 19 May 2025 22:02:58 +0000 (0:00:00.819) 0:09:21.162 ************ 2025-05-19 22:04:46.704079 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.704086 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.704092 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-19 22:04:46.704098 | orchestrator | 2025-05-19 22:04:46.704104 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-05-19 22:04:46.704110 | orchestrator | Monday 19 May 2025 22:02:58 +0000 (0:00:00.383) 0:09:21.545 ************ 2025-05-19 22:04:46.704116 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 22:04:46.704123 | orchestrator | 2025-05-19 22:04:46.704129 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-05-19 22:04:46.704135 | orchestrator | Monday 19 May 2025 22:03:01 +0000 (0:00:02.139) 0:09:23.684 ************ 2025-05-19 22:04:46.704143 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-19 22:04:46.704151 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.704157 | orchestrator | 2025-05-19 22:04:46.704163 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-05-19 22:04:46.704170 | orchestrator | Monday 19 May 2025 22:03:01 +0000 (0:00:00.216) 0:09:23.901 ************ 2025-05-19 22:04:46.704177 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 22:04:46.704190 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 22:04:46.704196 | orchestrator | 2025-05-19 22:04:46.704203 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-05-19 22:04:46.704209 | orchestrator | Monday 19 May 2025 22:03:09 +0000 (0:00:08.199) 0:09:32.101 ************ 2025-05-19 22:04:46.704215 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-19 22:04:46.704221 | orchestrator | 2025-05-19 22:04:46.704227 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-05-19 22:04:46.704234 | orchestrator | Monday 19 May 2025 22:03:13 +0000 (0:00:03.524) 0:09:35.626 ************ 2025-05-19 22:04:46.704240 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.704246 | orchestrator | 2025-05-19 22:04:46.704255 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-05-19 22:04:46.704262 | orchestrator | Monday 19 May 2025 22:03:13 +0000 (0:00:00.555) 0:09:36.181 ************ 2025-05-19 22:04:46.704271 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-19 22:04:46.704277 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-19 22:04:46.704287 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-19 22:04:46.704294 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-19 22:04:46.704300 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-19 22:04:46.704306 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-19 22:04:46.704312 | orchestrator | 2025-05-19 22:04:46.704318 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-05-19 22:04:46.704324 | orchestrator | Monday 19 May 2025 22:03:14 +0000 (0:00:01.004) 0:09:37.186 ************ 2025-05-19 22:04:46.704331 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:04:46.704337 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 22:04:46.704343 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 22:04:46.704349 | orchestrator | 2025-05-19 22:04:46.704355 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-05-19 22:04:46.704361 | orchestrator | Monday 19 May 2025 22:03:16 +0000 (0:00:02.206) 0:09:39.392 ************ 2025-05-19 22:04:46.704368 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 22:04:46.704374 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 22:04:46.704380 | orchestrator | changed: [testbed-node-3] 
2025-05-19 22:04:46.704386 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 22:04:46.704392 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 22:04:46.704399 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.704405 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 22:04:46.704411 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 22:04:46.704417 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.704423 | orchestrator | 2025-05-19 22:04:46.704429 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-05-19 22:04:46.704435 | orchestrator | Monday 19 May 2025 22:03:18 +0000 (0:00:01.314) 0:09:40.707 ************ 2025-05-19 22:04:46.704442 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.704448 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.704454 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.704460 | orchestrator | 2025-05-19 22:04:46.704466 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-05-19 22:04:46.704472 | orchestrator | Monday 19 May 2025 22:03:20 +0000 (0:00:02.576) 0:09:43.283 ************ 2025-05-19 22:04:46.704479 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.704485 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.704491 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.704497 | orchestrator | 2025-05-19 22:04:46.704503 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-05-19 22:04:46.704509 | orchestrator | Monday 19 May 2025 22:03:20 +0000 (0:00:00.303) 0:09:43.587 ************ 2025-05-19 22:04:46.704516 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.704522 | orchestrator | 2025-05-19 22:04:46.704561 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-05-19 22:04:46.704568 | orchestrator | Monday 19 May 2025 22:03:21 +0000 (0:00:00.608) 0:09:44.196 ************ 2025-05-19 22:04:46.704574 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.704580 | orchestrator | 2025-05-19 22:04:46.704586 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-05-19 22:04:46.704593 | orchestrator | Monday 19 May 2025 22:03:22 +0000 (0:00:00.528) 0:09:44.724 ************ 2025-05-19 22:04:46.704599 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.704605 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.704612 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.704623 | orchestrator | 2025-05-19 22:04:46.704629 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-05-19 22:04:46.704636 | orchestrator | Monday 19 May 2025 22:03:23 +0000 (0:00:01.135) 0:09:45.860 ************ 2025-05-19 22:04:46.704642 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.704648 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.704654 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.704660 | orchestrator | 2025-05-19 22:04:46.704667 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-05-19 22:04:46.704673 | orchestrator | Monday 19 May 2025 22:03:24 +0000 (0:00:01.516) 0:09:47.377 ************ 2025-05-19 22:04:46.704679 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.704685 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.704691 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.704697 | orchestrator | 2025-05-19 22:04:46.704703 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-05-19 22:04:46.704710 | orchestrator | Monday 19 May 2025 22:03:26 +0000 (0:00:01.989) 0:09:49.367 ************ 2025-05-19 22:04:46.704716 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.704722 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.704728 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.704734 | orchestrator | 2025-05-19 22:04:46.704741 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-05-19 22:04:46.704747 | orchestrator | Monday 19 May 2025 22:03:28 +0000 (0:00:01.933) 0:09:51.300 ************ 2025-05-19 22:04:46.704753 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.704759 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.704765 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.704772 | orchestrator | 2025-05-19 22:04:46.704782 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-19 22:04:46.704788 | orchestrator | Monday 19 May 2025 22:03:30 +0000 (0:00:01.518) 0:09:52.818 ************ 2025-05-19 22:04:46.704794 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.704801 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.704810 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.704816 | orchestrator | 2025-05-19 22:04:46.704822 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-19 22:04:46.704829 | orchestrator | Monday 19 May 2025 22:03:30 +0000 (0:00:00.714) 0:09:53.532 ************ 2025-05-19 22:04:46.704835 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.704841 | orchestrator | 2025-05-19 22:04:46.704847 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-19 22:04:46.704853 | orchestrator | 
Monday 19 May 2025 22:03:31 +0000 (0:00:00.749) 0:09:54.282 ************ 2025-05-19 22:04:46.704860 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.704866 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.704872 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.704878 | orchestrator | 2025-05-19 22:04:46.704885 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-19 22:04:46.704891 | orchestrator | Monday 19 May 2025 22:03:32 +0000 (0:00:00.357) 0:09:54.639 ************ 2025-05-19 22:04:46.704897 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.704903 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.704909 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.704915 | orchestrator | 2025-05-19 22:04:46.704922 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-19 22:04:46.704928 | orchestrator | Monday 19 May 2025 22:03:33 +0000 (0:00:01.258) 0:09:55.897 ************ 2025-05-19 22:04:46.704934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 22:04:46.704941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 22:04:46.704947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 22:04:46.704953 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.704964 | orchestrator | 2025-05-19 22:04:46.704970 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-19 22:04:46.704976 | orchestrator | Monday 19 May 2025 22:03:34 +0000 (0:00:00.800) 0:09:56.698 ************ 2025-05-19 22:04:46.704982 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.704989 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.704995 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705001 | orchestrator | 2025-05-19 22:04:46.705007 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-05-19 22:04:46.705013 | orchestrator | 2025-05-19 22:04:46.705019 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-19 22:04:46.705026 | orchestrator | Monday 19 May 2025 22:03:34 +0000 (0:00:00.747) 0:09:57.446 ************ 2025-05-19 22:04:46.705032 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.705038 | orchestrator | 2025-05-19 22:04:46.705045 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-19 22:04:46.705051 | orchestrator | Monday 19 May 2025 22:03:35 +0000 (0:00:00.466) 0:09:57.913 ************ 2025-05-19 22:04:46.705057 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.705063 | orchestrator | 2025-05-19 22:04:46.705069 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-19 22:04:46.705076 | orchestrator | Monday 19 May 2025 22:03:35 +0000 (0:00:00.681) 0:09:58.594 ************ 2025-05-19 22:04:46.705082 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.705088 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.705093 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.705099 | orchestrator | 2025-05-19 22:04:46.705104 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-19 22:04:46.705110 | orchestrator | Monday 19 May 2025 22:03:36 +0000 (0:00:00.291) 0:09:58.886 ************ 2025-05-19 22:04:46.705115 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.705120 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.705126 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705131 | orchestrator | 
2025-05-19 22:04:46.705137 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-19 22:04:46.705142 | orchestrator | Monday 19 May 2025 22:03:36 +0000 (0:00:00.680) 0:09:59.566 ************ 2025-05-19 22:04:46.705148 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.705153 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.705159 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705164 | orchestrator | 2025-05-19 22:04:46.705169 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-19 22:04:46.705175 | orchestrator | Monday 19 May 2025 22:03:37 +0000 (0:00:00.716) 0:10:00.283 ************ 2025-05-19 22:04:46.705180 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.705186 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.705191 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705196 | orchestrator | 2025-05-19 22:04:46.705202 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-19 22:04:46.705207 | orchestrator | Monday 19 May 2025 22:03:38 +0000 (0:00:00.954) 0:10:01.237 ************ 2025-05-19 22:04:46.705213 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.705218 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.705224 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.705229 | orchestrator | 2025-05-19 22:04:46.705234 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-19 22:04:46.705240 | orchestrator | Monday 19 May 2025 22:03:38 +0000 (0:00:00.294) 0:10:01.532 ************ 2025-05-19 22:04:46.705245 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.705251 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.705256 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.705262 | orchestrator | 2025-05-19 22:04:46.705271 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-19 22:04:46.705279 | orchestrator | Monday 19 May 2025 22:03:39 +0000 (0:00:00.280) 0:10:01.812 ************ 2025-05-19 22:04:46.705284 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.705290 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.705295 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.705301 | orchestrator | 2025-05-19 22:04:46.705311 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-19 22:04:46.705316 | orchestrator | Monday 19 May 2025 22:03:39 +0000 (0:00:00.291) 0:10:02.104 ************ 2025-05-19 22:04:46.705322 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.705327 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.705333 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705338 | orchestrator | 2025-05-19 22:04:46.705344 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-19 22:04:46.705349 | orchestrator | Monday 19 May 2025 22:03:40 +0000 (0:00:01.004) 0:10:03.109 ************ 2025-05-19 22:04:46.705355 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.705360 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.705365 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705371 | orchestrator | 2025-05-19 22:04:46.705376 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-19 22:04:46.705382 | orchestrator | Monday 19 May 2025 22:03:41 +0000 (0:00:00.719) 0:10:03.829 ************ 2025-05-19 22:04:46.705387 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.705392 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.705398 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.705403 | orchestrator | 2025-05-19 22:04:46.705408 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-05-19 22:04:46.705414 | orchestrator | Monday 19 May 2025 22:03:41 +0000 (0:00:00.288) 0:10:04.117 ************ 2025-05-19 22:04:46.705419 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.705425 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.705430 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.705436 | orchestrator | 2025-05-19 22:04:46.705441 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-19 22:04:46.705447 | orchestrator | Monday 19 May 2025 22:03:41 +0000 (0:00:00.304) 0:10:04.421 ************ 2025-05-19 22:04:46.705452 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.705457 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.705463 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705468 | orchestrator | 2025-05-19 22:04:46.705474 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-19 22:04:46.705479 | orchestrator | Monday 19 May 2025 22:03:42 +0000 (0:00:00.569) 0:10:04.991 ************ 2025-05-19 22:04:46.705485 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.705490 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.705495 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705501 | orchestrator | 2025-05-19 22:04:46.705506 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-19 22:04:46.705512 | orchestrator | Monday 19 May 2025 22:03:42 +0000 (0:00:00.328) 0:10:05.319 ************ 2025-05-19 22:04:46.705517 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.705523 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.705538 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705544 | orchestrator | 2025-05-19 22:04:46.705549 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-05-19 22:04:46.705555 | orchestrator | Monday 19 May 2025 22:03:43 +0000 (0:00:00.317) 0:10:05.636 ************ 2025-05-19 22:04:46.705560 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.705566 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.705571 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.705577 | orchestrator | 2025-05-19 22:04:46.705582 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-19 22:04:46.705591 | orchestrator | Monday 19 May 2025 22:03:43 +0000 (0:00:00.290) 0:10:05.927 ************ 2025-05-19 22:04:46.705597 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.705602 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.705608 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.705613 | orchestrator | 2025-05-19 22:04:46.705619 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-19 22:04:46.705624 | orchestrator | Monday 19 May 2025 22:03:43 +0000 (0:00:00.515) 0:10:06.443 ************ 2025-05-19 22:04:46.705630 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.705635 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.705640 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.705646 | orchestrator | 2025-05-19 22:04:46.705651 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-19 22:04:46.705657 | orchestrator | Monday 19 May 2025 22:03:44 +0000 (0:00:00.298) 0:10:06.741 ************ 2025-05-19 22:04:46.705662 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.705667 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.705673 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705678 | orchestrator | 2025-05-19 22:04:46.705684 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-05-19 22:04:46.705689 | orchestrator | Monday 19 May 2025 22:03:44 +0000 (0:00:00.321) 0:10:07.063 ************ 2025-05-19 22:04:46.705694 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.705700 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.705705 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.705711 | orchestrator | 2025-05-19 22:04:46.705716 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-05-19 22:04:46.705722 | orchestrator | Monday 19 May 2025 22:03:45 +0000 (0:00:00.729) 0:10:07.793 ************ 2025-05-19 22:04:46.705727 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.705732 | orchestrator | 2025-05-19 22:04:46.705738 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-19 22:04:46.705743 | orchestrator | Monday 19 May 2025 22:03:45 +0000 (0:00:00.530) 0:10:08.323 ************ 2025-05-19 22:04:46.705749 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:04:46.705754 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 22:04:46.705759 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 22:04:46.705765 | orchestrator | 2025-05-19 22:04:46.705770 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-19 22:04:46.705778 | orchestrator | Monday 19 May 2025 22:03:47 +0000 (0:00:02.132) 0:10:10.455 ************ 2025-05-19 22:04:46.705784 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 22:04:46.705790 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-19 22:04:46.705795 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.705804 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 22:04:46.705809 
| orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-19 22:04:46.705814 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.705820 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 22:04:46.705825 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-19 22:04:46.705831 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.705836 | orchestrator | 2025-05-19 22:04:46.705841 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-05-19 22:04:46.705847 | orchestrator | Monday 19 May 2025 22:03:49 +0000 (0:00:01.358) 0:10:11.814 ************ 2025-05-19 22:04:46.705852 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.705858 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.705863 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.705868 | orchestrator | 2025-05-19 22:04:46.705874 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-05-19 22:04:46.705883 | orchestrator | Monday 19 May 2025 22:03:49 +0000 (0:00:00.310) 0:10:12.124 ************ 2025-05-19 22:04:46.705888 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.705894 | orchestrator | 2025-05-19 22:04:46.705899 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-05-19 22:04:46.705905 | orchestrator | Monday 19 May 2025 22:03:50 +0000 (0:00:00.508) 0:10:12.633 ************ 2025-05-19 22:04:46.705910 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.705916 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.705921 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.705927 | orchestrator | 2025-05-19 22:04:46.705932 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-05-19 22:04:46.705937 | orchestrator | Monday 19 May 2025 22:03:51 +0000 (0:00:01.219) 0:10:13.852 ************ 2025-05-19 22:04:46.705943 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:04:46.705948 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-19 22:04:46.705954 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:04:46.705959 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-19 22:04:46.705964 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:04:46.705970 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-19 22:04:46.705975 | orchestrator | 2025-05-19 22:04:46.705981 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-19 22:04:46.705986 | orchestrator | Monday 19 May 2025 22:03:55 +0000 (0:00:04.273) 0:10:18.126 ************ 2025-05-19 22:04:46.705991 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:04:46.705997 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 22:04:46.706002 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-05-19 22:04:46.706007 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 22:04:46.706013 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:04:46.706036 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 22:04:46.706041 | orchestrator | 2025-05-19 22:04:46.706047 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-19 22:04:46.706052 | orchestrator | Monday 19 May 2025 22:03:57 +0000 (0:00:02.152) 0:10:20.279 ************ 2025-05-19 22:04:46.706058 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-19 22:04:46.706063 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.706069 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-19 22:04:46.706074 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.706079 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-19 22:04:46.706085 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.706090 | orchestrator | 2025-05-19 22:04:46.706096 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-05-19 22:04:46.706101 | orchestrator | Monday 19 May 2025 22:03:58 +0000 (0:00:01.136) 0:10:21.415 ************ 2025-05-19 22:04:46.706107 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-19 22:04:46.706116 | orchestrator | 2025-05-19 22:04:46.706122 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-05-19 22:04:46.706127 | orchestrator | Monday 19 May 2025 22:03:59 +0000 (0:00:00.192) 0:10:21.608 ************ 2025-05-19 22:04:46.706136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 22:04:46.706145 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 22:04:46.706151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 22:04:46.706156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 22:04:46.706162 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 22:04:46.706167 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.706173 | orchestrator | 2025-05-19 22:04:46.706178 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-05-19 22:04:46.706184 | orchestrator | Monday 19 May 2025 22:03:59 +0000 (0:00:00.653) 0:10:22.261 ************ 2025-05-19 22:04:46.706189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 22:04:46.706195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 22:04:46.706200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 22:04:46.706206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 22:04:46.706211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-19 22:04:46.706217 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.706222 | orchestrator | 2025-05-19 22:04:46.706227 | 
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-05-19 22:04:46.706233 | orchestrator | Monday 19 May 2025 22:04:00 +0000 (0:00:00.837) 0:10:23.098 ************ 2025-05-19 22:04:46.706238 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-19 22:04:46.706244 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-19 22:04:46.706249 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-19 22:04:46.706255 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-19 22:04:46.706260 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-19 22:04:46.706266 | orchestrator | 2025-05-19 22:04:46.706271 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-05-19 22:04:46.706277 | orchestrator | Monday 19 May 2025 22:04:31 +0000 (0:00:31.486) 0:10:54.585 ************ 2025-05-19 22:04:46.706282 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.706288 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.706293 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.706298 | orchestrator | 2025-05-19 22:04:46.706304 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-05-19 22:04:46.706313 | orchestrator | Monday 19 May 2025 22:04:32 +0000 (0:00:00.330) 0:10:54.915 
************ 2025-05-19 22:04:46.706319 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.706324 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.706330 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.706335 | orchestrator | 2025-05-19 22:04:46.706340 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-05-19 22:04:46.706346 | orchestrator | Monday 19 May 2025 22:04:32 +0000 (0:00:00.316) 0:10:55.232 ************ 2025-05-19 22:04:46.706351 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.706357 | orchestrator | 2025-05-19 22:04:46.706362 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-05-19 22:04:46.706368 | orchestrator | Monday 19 May 2025 22:04:33 +0000 (0:00:00.741) 0:10:55.974 ************ 2025-05-19 22:04:46.706373 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.706378 | orchestrator | 2025-05-19 22:04:46.706384 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-05-19 22:04:46.706389 | orchestrator | Monday 19 May 2025 22:04:33 +0000 (0:00:00.522) 0:10:56.497 ************ 2025-05-19 22:04:46.706395 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.706400 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.706406 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.706411 | orchestrator | 2025-05-19 22:04:46.706416 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-05-19 22:04:46.706422 | orchestrator | Monday 19 May 2025 22:04:35 +0000 (0:00:01.356) 0:10:57.853 ************ 2025-05-19 22:04:46.706430 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.706436 | orchestrator | 
changed: [testbed-node-4] 2025-05-19 22:04:46.706441 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.706447 | orchestrator | 2025-05-19 22:04:46.706452 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-05-19 22:04:46.706460 | orchestrator | Monday 19 May 2025 22:04:36 +0000 (0:00:01.444) 0:10:59.297 ************ 2025-05-19 22:04:46.706466 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:04:46.706472 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:04:46.706477 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:04:46.706483 | orchestrator | 2025-05-19 22:04:46.706488 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-05-19 22:04:46.706493 | orchestrator | Monday 19 May 2025 22:04:38 +0000 (0:00:01.714) 0:11:01.012 ************ 2025-05-19 22:04:46.706499 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.706505 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.706510 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-19 22:04:46.706515 | orchestrator | 2025-05-19 22:04:46.706521 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-19 22:04:46.706538 | orchestrator | Monday 19 May 2025 22:04:41 +0000 (0:00:02.617) 0:11:03.629 ************ 2025-05-19 22:04:46.706544 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.706549 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.706554 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.706560 | orchestrator | 2025-05-19 22:04:46.706565 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-05-19 22:04:46.706571 | orchestrator | Monday 19 May 2025 22:04:41 +0000 (0:00:00.350) 0:11:03.979 ************ 2025-05-19 22:04:46.706576 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:04:46.706585 | orchestrator | 2025-05-19 22:04:46.706591 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-19 22:04:46.706596 | orchestrator | Monday 19 May 2025 22:04:41 +0000 (0:00:00.515) 0:11:04.495 ************ 2025-05-19 22:04:46.706602 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.706607 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.706612 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.706618 | orchestrator | 2025-05-19 22:04:46.706623 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-05-19 22:04:46.706629 | orchestrator | Monday 19 May 2025 22:04:42 +0000 (0:00:00.606) 0:11:05.102 ************ 2025-05-19 22:04:46.706634 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.706639 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:04:46.706645 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:04:46.706650 | orchestrator | 2025-05-19 22:04:46.706655 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-19 22:04:46.706661 | orchestrator | Monday 19 May 2025 22:04:42 +0000 (0:00:00.371) 0:11:05.473 ************ 2025-05-19 22:04:46.706666 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 22:04:46.706672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 22:04:46.706681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 22:04:46.706690 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:04:46.706699 | 
orchestrator | 2025-05-19 22:04:46.706707 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-05-19 22:04:46.706715 | orchestrator | Monday 19 May 2025 22:04:43 +0000 (0:00:00.659) 0:11:06.133 ************ 2025-05-19 22:04:46.706725 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:04:46.706733 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:04:46.706743 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:04:46.706749 | orchestrator | 2025-05-19 22:04:46.706754 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:04:46.706760 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-05-19 22:04:46.706766 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-05-19 22:04:46.706771 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-05-19 22:04:46.706777 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-05-19 22:04:46.706782 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-05-19 22:04:46.706790 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-05-19 22:04:46.706798 | orchestrator | 2025-05-19 22:04:46.706807 | orchestrator | 2025-05-19 22:04:46.706815 | orchestrator | 2025-05-19 22:04:46.706824 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:04:46.706832 | orchestrator | Monday 19 May 2025 22:04:43 +0000 (0:00:00.245) 0:11:06.378 ************ 2025-05-19 22:04:46.706841 | orchestrator | =============================================================================== 2025-05-19 22:04:46.706851 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 74.69s 2025-05-19 22:04:46.706861 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.50s 2025-05-19 22:04:46.706867 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.49s 2025-05-19 22:04:46.706876 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.21s 2025-05-19 22:04:46.706887 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.87s 2025-05-19 22:04:46.706892 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.03s 2025-05-19 22:04:46.706898 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.79s 2025-05-19 22:04:46.706903 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.49s 2025-05-19 22:04:46.706908 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.11s 2025-05-19 22:04:46.706914 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.20s 2025-05-19 22:04:46.706919 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.42s 2025-05-19 22:04:46.706924 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.35s 2025-05-19 22:04:46.706930 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.65s 2025-05-19 22:04:46.706935 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 4.45s 2025-05-19 22:04:46.706941 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.27s 2025-05-19 22:04:46.706946 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.83s 2025-05-19 22:04:46.706951 | orchestrator 
| ceph-mon : Copy admin keyring over to mons ------------------------------ 3.71s 2025-05-19 22:04:46.706957 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.58s 2025-05-19 22:04:46.706962 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.52s 2025-05-19 22:04:46.706968 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.50s 2025-05-19 22:04:46.706973 | orchestrator | 2025-05-19 22:04:46 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:04:46.706979 | orchestrator | 2025-05-19 22:04:46 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED 2025-05-19 22:04:46.706984 | orchestrator | 2025-05-19 22:04:46 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:04:49.740894 | orchestrator | 2025-05-19 22:04:49 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:04:49.744317 | orchestrator | 2025-05-19 22:04:49 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:04:49.745553 | orchestrator | 2025-05-19 22:04:49 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED 2025-05-19 22:04:49.745989 | orchestrator | 2025-05-19 22:04:49 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:04:52.797247 | orchestrator | 2025-05-19 22:04:52 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:04:52.798987 | orchestrator | 2025-05-19 22:04:52 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:04:52.801789 | orchestrator | 2025-05-19 22:04:52 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED 2025-05-19 22:04:52.801829 | orchestrator | 2025-05-19 22:04:52 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:04:55.858830 | orchestrator | 2025-05-19 22:04:55 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in 
state STARTED 2025-05-19 22:04:55.860269 | orchestrator | 2025-05-19 22:04:55 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:04:55.863490 | orchestrator | 2025-05-19 22:04:55 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED 2025-05-19 22:04:55.863584 | orchestrator | 2025-05-19 22:04:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:05:41.643869 | orchestrator | 2025-05-19 22:05:41 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:05:41.645281 | orchestrator | 2025-05-19 22:05:41 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:05:41.646879 | orchestrator | 
2025-05-19 22:05:41 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED
2025-05-19 22:05:41.646988 | orchestrator | 2025-05-19 22:05:41 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:05:44.696860 | orchestrator | 2025-05-19 22:05:44 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED
2025-05-19 22:05:44.698215 | orchestrator | 2025-05-19 22:05:44 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED
2025-05-19 22:05:44.700155 | orchestrator | 2025-05-19 22:05:44 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED
2025-05-19 22:05:44.700712 | orchestrator | 2025-05-19 22:05:44 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:05:47.741516 | orchestrator | 2025-05-19 22:05:47 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED
2025-05-19 22:05:47.742352 | orchestrator | 2025-05-19 22:05:47 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED
2025-05-19 22:05:47.744479 | orchestrator | 2025-05-19 22:05:47 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state STARTED
2025-05-19 22:05:47.744615 | orchestrator | 2025-05-19 22:05:47 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:05:50.785662 | orchestrator | 2025-05-19 22:05:50 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED
2025-05-19 22:05:50.786666 | orchestrator | 2025-05-19 22:05:50 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED
2025-05-19 22:05:50.788084 | orchestrator | 2025-05-19 22:05:50 | INFO  | Task 59c722d4-9154-4a7b-8ca3-c2e614aee5da is in state SUCCESS
2025-05-19 22:05:50.788287 | orchestrator | 2025-05-19 22:05:50 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:05:50.789988 | orchestrator |
2025-05-19 22:05:50.790067 | orchestrator |
2025-05-19 22:05:50.790084 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:05:50.790097 | orchestrator |
2025-05-19 22:05:50.790108 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:05:50.790122 | orchestrator | Monday 19 May 2025 22:02:55 +0000 (0:00:00.267) 0:00:00.267 ************
2025-05-19 22:05:50.790141 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:05:50.790160 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:05:50.790178 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:05:50.790194 | orchestrator |
2025-05-19 22:05:50.790211 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:05:50.790228 | orchestrator | Monday 19 May 2025 22:02:55 +0000 (0:00:00.293) 0:00:00.560 ************
2025-05-19 22:05:50.790261 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-05-19 22:05:50.790294 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-05-19 22:05:50.790328 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-05-19 22:05:50.790383 | orchestrator |
2025-05-19 22:05:50.790401 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-05-19 22:05:50.790420 | orchestrator |
2025-05-19 22:05:50.790454 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-19 22:05:50.790491 | orchestrator | Monday 19 May 2025 22:02:56 +0000 (0:00:00.435) 0:00:00.995 ************
2025-05-19 22:05:50.790578 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:05:50.790614 | orchestrator |
2025-05-19 22:05:50.790649 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-05-19 22:05:50.790684 | orchestrator | Monday 19 May 2025 22:02:56 +0000 (0:00:00.472) 0:00:01.468 ************
2025-05-19 22:05:50.790719 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-19 22:05:50.790753 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-19 22:05:50.790784 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-19 22:05:50.790846 | orchestrator |
2025-05-19 22:05:50.790873 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-05-19 22:05:50.790899 | orchestrator | Monday 19 May 2025 22:02:57 +0000 (0:00:00.642) 0:00:02.111 ************
2025-05-19 22:05:50.790941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.790975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791184 | orchestrator |
2025-05-19 22:05:50.791201 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-19 22:05:50.791218 | orchestrator | Monday 19 May 2025 22:02:59 +0000 (0:00:01.626) 0:00:03.737 ************
2025-05-19 22:05:50.791236 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:05:50.791254 | orchestrator |
2025-05-19 22:05:50.791270 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2025-05-19 22:05:50.791288 | orchestrator | Monday 19 May 2025 22:02:59 +0000 (0:00:00.530) 0:00:04.268 ************
2025-05-19 22:05:50.791324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791487 | orchestrator |
2025-05-19 22:05:50.791499 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2025-05-19 22:05:50.791510 | orchestrator | Monday 19 May 2025 22:03:02 +0000 (0:00:02.992) 0:00:07.260 ************
2025-05-19 22:05:50.791521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791546 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:05:50.791558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791721 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:05:50.791734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791758 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:05:50.791769 | orchestrator |
2025-05-19 22:05:50.791780 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2025-05-19 22:05:50.791792 | orchestrator | Monday 19 May 2025 22:03:04 +0000 (0:00:01.411) 0:00:08.672 ************
2025-05-19 22:05:50.791803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791846 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:05:50.791863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791888 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:05:50.791900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.791919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.791938 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:05:50.791950 | orchestrator |
2025-05-19 22:05:50.791961 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2025-05-19 22:05:50.791972 | orchestrator | Monday 19 May 2025 22:03:04 +0000 (0:00:00.645) 0:00:09.317 ************
2025-05-19 22:05:50.791989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.792001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.792013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.792032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.792058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.792070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.792082 | orchestrator |
2025-05-19 22:05:50.792093 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2025-05-19 22:05:50.792104 | orchestrator | Monday 19 May 2025 22:03:06 +0000 (0:00:02.163) 0:00:11.481 ************
2025-05-19 22:05:50.792115 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:05:50.792126 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:05:50.792137 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:05:50.792148 | orchestrator |
2025-05-19 22:05:50.792159 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2025-05-19 22:05:50.792170 | orchestrator | Monday 19 May 2025 22:03:09 +0000 (0:00:02.801) 0:00:14.282 ************
2025-05-19 22:05:50.792181 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:05:50.792192 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:05:50.792203 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:05:50.792214 | orchestrator |
2025-05-19 22:05:50.792225 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2025-05-19 22:05:50.792236 | orchestrator | Monday 19 May 2025 22:03:11 +0000 (0:00:01.439) 0:00:15.722 ************
2025-05-19 22:05:50.792248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.792273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.792292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-19 22:05:50.792305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-19 22:05:50.792318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 22:05:50.792345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-19 22:05:50.792401 | orchestrator | 2025-05-19 22:05:50.792412 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-19 22:05:50.792424 | orchestrator | Monday 19 May 2025 22:03:13 +0000 (0:00:02.013) 0:00:17.735 ************ 2025-05-19 22:05:50.792435 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:50.792446 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:50.792457 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:50.792468 | orchestrator | 2025-05-19 22:05:50.792479 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-19 22:05:50.792495 | orchestrator | Monday 19 May 2025 22:03:13 +0000 
(0:00:00.313) 0:00:18.049 ************ 2025-05-19 22:05:50.792506 | orchestrator | 2025-05-19 22:05:50.792517 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-19 22:05:50.792528 | orchestrator | Monday 19 May 2025 22:03:13 +0000 (0:00:00.076) 0:00:18.126 ************ 2025-05-19 22:05:50.792539 | orchestrator | 2025-05-19 22:05:50.792550 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-05-19 22:05:50.792561 | orchestrator | Monday 19 May 2025 22:03:13 +0000 (0:00:00.060) 0:00:18.186 ************ 2025-05-19 22:05:50.792571 | orchestrator | 2025-05-19 22:05:50.792582 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-05-19 22:05:50.792593 | orchestrator | Monday 19 May 2025 22:03:13 +0000 (0:00:00.296) 0:00:18.483 ************ 2025-05-19 22:05:50.792604 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:50.792615 | orchestrator | 2025-05-19 22:05:50.792626 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-05-19 22:05:50.792637 | orchestrator | Monday 19 May 2025 22:03:14 +0000 (0:00:00.215) 0:00:18.698 ************ 2025-05-19 22:05:50.792648 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:50.792659 | orchestrator | 2025-05-19 22:05:50.792670 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-05-19 22:05:50.792681 | orchestrator | Monday 19 May 2025 22:03:14 +0000 (0:00:00.214) 0:00:18.912 ************ 2025-05-19 22:05:50.792692 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:50.792702 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:05:50.792713 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:05:50.792724 | orchestrator | 2025-05-19 22:05:50.792735 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 
2025-05-19 22:05:50.792746 | orchestrator | Monday 19 May 2025 22:04:23 +0000 (0:01:09.122) 0:01:28.035 ************ 2025-05-19 22:05:50.792757 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:50.792768 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:05:50.792786 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:05:50.792797 | orchestrator | 2025-05-19 22:05:50.792808 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-19 22:05:50.792819 | orchestrator | Monday 19 May 2025 22:05:39 +0000 (0:01:15.646) 0:02:43.681 ************ 2025-05-19 22:05:50.792831 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:05:50.792842 | orchestrator | 2025-05-19 22:05:50.792852 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-05-19 22:05:50.792863 | orchestrator | Monday 19 May 2025 22:05:39 +0000 (0:00:00.536) 0:02:44.218 ************ 2025-05-19 22:05:50.792875 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:50.792886 | orchestrator | 2025-05-19 22:05:50.792897 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-05-19 22:05:50.792908 | orchestrator | Monday 19 May 2025 22:05:41 +0000 (0:00:02.193) 0:02:46.411 ************ 2025-05-19 22:05:50.792919 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:50.792930 | orchestrator | 2025-05-19 22:05:50.792941 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-05-19 22:05:50.792952 | orchestrator | Monday 19 May 2025 22:05:43 +0000 (0:00:02.112) 0:02:48.524 ************ 2025-05-19 22:05:50.792963 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:50.792974 | orchestrator | 2025-05-19 22:05:50.792984 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 
2025-05-19 22:05:50.792996 | orchestrator | Monday 19 May 2025 22:05:46 +0000 (0:00:02.537) 0:02:51.061 ************ 2025-05-19 22:05:50.793007 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:50.793018 | orchestrator | 2025-05-19 22:05:50.793029 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:05:50.793041 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 22:05:50.793054 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 22:05:50.793065 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 22:05:50.793076 | orchestrator | 2025-05-19 22:05:50.793087 | orchestrator | 2025-05-19 22:05:50.793098 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:05:50.793115 | orchestrator | Monday 19 May 2025 22:05:48 +0000 (0:00:02.463) 0:02:53.525 ************ 2025-05-19 22:05:50.793127 | orchestrator | =============================================================================== 2025-05-19 22:05:50.793138 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 75.65s 2025-05-19 22:05:50.793149 | orchestrator | opensearch : Restart opensearch container ------------------------------ 69.12s 2025-05-19 22:05:50.793160 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.99s 2025-05-19 22:05:50.793171 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.80s 2025-05-19 22:05:50.793182 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.54s 2025-05-19 22:05:50.793193 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.46s 2025-05-19 22:05:50.793203 | orchestrator | 
opensearch : Wait for OpenSearch to become ready ------------------------ 2.19s 2025-05-19 22:05:50.793214 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.16s 2025-05-19 22:05:50.793225 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.11s 2025-05-19 22:05:50.793236 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.01s 2025-05-19 22:05:50.793255 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.63s 2025-05-19 22:05:50.793266 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.44s 2025-05-19 22:05:50.793297 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.41s 2025-05-19 22:05:50.793319 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.65s 2025-05-19 22:05:50.793337 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s 2025-05-19 22:05:50.793423 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2025-05-19 22:05:50.793456 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-05-19 22:05:50.793489 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2025-05-19 22:05:50.793522 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-05-19 22:05:50.793554 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.43s 2025-05-19 22:05:53.831844 | orchestrator | 2025-05-19 22:05:53 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:05:53.833195 | orchestrator | 2025-05-19 22:05:53 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 
22:05:53.833278 | orchestrator | 2025-05-19 22:05:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:05:56.905179 | orchestrator | 2025-05-19 22:05:56 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state STARTED 2025-05-19 22:05:56.906809 | orchestrator | 2025-05-19 22:05:56 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:05:56.906937 | orchestrator | 2025-05-19 22:05:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:05:59.965840 | orchestrator | 2025-05-19 22:05:59.965966 | orchestrator | 2025-05-19 22:05:59.965983 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-19 22:05:59.965996 | orchestrator | 2025-05-19 22:05:59.966007 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-19 22:05:59.966077 | orchestrator | Monday 19 May 2025 22:02:55 +0000 (0:00:00.104) 0:00:00.104 ************ 2025-05-19 22:05:59.966417 | orchestrator | ok: [localhost] => { 2025-05-19 22:05:59.966444 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-19 22:05:59.966456 | orchestrator | } 2025-05-19 22:05:59.966468 | orchestrator | 2025-05-19 22:05:59.966479 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-19 22:05:59.966490 | orchestrator | Monday 19 May 2025 22:02:55 +0000 (0:00:00.039) 0:00:00.143 ************ 2025-05-19 22:05:59.966502 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-19 22:05:59.966569 | orchestrator | ...ignoring 2025-05-19 22:05:59.966581 | orchestrator | 2025-05-19 22:05:59.966593 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-19 22:05:59.966604 | orchestrator | Monday 19 May 2025 22:02:58 +0000 (0:00:02.826) 0:00:02.969 ************ 2025-05-19 22:05:59.966615 | orchestrator | skipping: [localhost] 2025-05-19 22:05:59.966626 | orchestrator | 2025-05-19 22:05:59.966637 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-19 22:05:59.966648 | orchestrator | Monday 19 May 2025 22:02:58 +0000 (0:00:00.059) 0:00:03.029 ************ 2025-05-19 22:05:59.966659 | orchestrator | ok: [localhost] 2025-05-19 22:05:59.966669 | orchestrator | 2025-05-19 22:05:59.966680 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:05:59.966691 | orchestrator | 2025-05-19 22:05:59.966702 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:05:59.966713 | orchestrator | Monday 19 May 2025 22:02:58 +0000 (0:00:00.174) 0:00:03.203 ************ 2025-05-19 22:05:59.966724 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.966735 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:05:59.966745 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:05:59.966784 | orchestrator | 2025-05-19 22:05:59.966796 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:05:59.966807 | orchestrator | Monday 19 May 2025 22:02:58 +0000 (0:00:00.291) 0:00:03.495 ************ 2025-05-19 22:05:59.966818 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-19 22:05:59.966830 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-05-19 22:05:59.966840 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-19 22:05:59.966851 | orchestrator | 2025-05-19 22:05:59.966863 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-19 22:05:59.966874 | orchestrator | 2025-05-19 22:05:59.966885 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-19 22:05:59.966896 | orchestrator | Monday 19 May 2025 22:02:59 +0000 (0:00:00.564) 0:00:04.060 ************ 2025-05-19 22:05:59.966907 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 22:05:59.967017 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-19 22:05:59.967029 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-19 22:05:59.967040 | orchestrator | 2025-05-19 22:05:59.967051 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 22:05:59.967062 | orchestrator | Monday 19 May 2025 22:02:59 +0000 (0:00:00.424) 0:00:04.484 ************ 2025-05-19 22:05:59.967074 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:05:59.967086 | orchestrator | 2025-05-19 22:05:59.967097 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-19 22:05:59.967123 | orchestrator | Monday 19 May 2025 22:03:00 +0000 (0:00:00.756) 0:00:05.241 ************ 2025-05-19 22:05:59.967163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 22:05:59.967180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 22:05:59.967210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 22:05:59.967223 | orchestrator | 2025-05-19 22:05:59.967243 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-19 22:05:59.967256 | orchestrator | Monday 19 May 2025 22:03:04 +0000 (0:00:03.882) 0:00:09.123 ************ 2025-05-19 22:05:59.967267 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.967279 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.967290 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.967301 | orchestrator | 2025-05-19 22:05:59.967312 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-19 22:05:59.967323 | orchestrator | Monday 19 May 2025 22:03:05 +0000 (0:00:00.524) 0:00:09.647 ************ 2025-05-19 22:05:59.967366 | orchestrator | skipping: [testbed-node-1] 2025-05-19 
22:05:59.967377 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.967396 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.967407 | orchestrator | 2025-05-19 22:05:59.967418 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-19 22:05:59.967429 | orchestrator | Monday 19 May 2025 22:03:06 +0000 (0:00:01.306) 0:00:10.953 ************ 2025-05-19 22:05:59.967441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 22:05:59.967468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 22:05:59.967482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 
22:05:59.967502 | orchestrator | 2025-05-19 22:05:59.967514 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-19 22:05:59.967525 | orchestrator | Monday 19 May 2025 22:03:09 +0000 (0:00:03.178) 0:00:14.132 ************ 2025-05-19 22:05:59.967536 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.967547 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.967558 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.967569 | orchestrator | 2025-05-19 22:05:59.967581 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-05-19 22:05:59.967592 | orchestrator | Monday 19 May 2025 22:03:10 +0000 (0:00:01.001) 0:00:15.134 ************ 2025-05-19 22:05:59.967603 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.967614 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:05:59.967630 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:05:59.967650 | orchestrator | 2025-05-19 22:05:59.967670 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 22:05:59.967693 | orchestrator | Monday 19 May 2025 22:03:14 +0000 (0:00:03.651) 0:00:18.786 ************ 2025-05-19 22:05:59.967707 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:05:59.967720 | orchestrator | 2025-05-19 22:05:59.967731 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-19 22:05:59.967742 | orchestrator | Monday 19 May 2025 22:03:14 +0000 (0:00:00.691) 0:00:19.477 ************ 2025-05-19 22:05:59.967764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:05:59.967785 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.967798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:05:59.967810 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.967835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:05:59.967856 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.967867 | orchestrator | 2025-05-19 22:05:59.967878 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-05-19 22:05:59.967889 | orchestrator | Monday 19 May 2025 22:03:17 +0000 (0:00:02.887) 0:00:22.365 ************ 2025-05-19 22:05:59.967901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:05:59.967913 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.967935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:05:59.967955 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.967967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:05:59.967979 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.967990 | orchestrator | 2025-05-19 22:05:59.968001 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-19 22:05:59.968012 | orchestrator | Monday 19 May 2025 22:03:19 +0000 (0:00:02.145) 0:00:24.510 ************ 2025-05-19 22:05:59.968029 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:05:59.968054 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.968074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:05:59.968087 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.968104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-19 22:05:59.968123 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.968134 | orchestrator | 2025-05-19 22:05:59.968145 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-19 22:05:59.968156 | orchestrator | Monday 19 May 2025 22:03:22 +0000 (0:00:02.692) 
0:00:27.203 ************ 2025-05-19 22:05:59 | INFO  | Task ed398f5c-0b5a-4d4c-a137-87c0efc65047 is in state SUCCESS 2025-05-19 22:05:59.968174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-05-19 22:05:59.968206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 22:05:59.968233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-19 22:05:59.968246 | orchestrator | 2025-05-19 22:05:59.968257 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-19 22:05:59.968268 | orchestrator | Monday 19 May 2025 22:03:25 +0000 (0:00:02.866) 0:00:30.070 ************ 
2025-05-19 22:05:59.968280 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.968290 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:05:59.968301 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:05:59.968312 | orchestrator | 2025-05-19 22:05:59.968323 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-19 22:05:59.968372 | orchestrator | Monday 19 May 2025 22:03:26 +0000 (0:00:01.037) 0:00:31.107 ************ 2025-05-19 22:05:59.968383 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.968394 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:05:59.968405 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:05:59.968416 | orchestrator | 2025-05-19 22:05:59.968427 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-19 22:05:59.968438 | orchestrator | Monday 19 May 2025 22:03:26 +0000 (0:00:00.301) 0:00:31.409 ************ 2025-05-19 22:05:59.968449 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.968460 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:05:59.968471 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:05:59.968482 | orchestrator | 2025-05-19 22:05:59.968493 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-19 22:05:59.968504 | orchestrator | Monday 19 May 2025 22:03:27 +0000 (0:00:00.343) 0:00:31.753 ************ 2025-05-19 22:05:59.968515 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-19 22:05:59.968534 | orchestrator | ...ignoring 2025-05-19 22:05:59.968546 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-19 22:05:59.968557 | orchestrator | ...ignoring 2025-05-19 22:05:59.968567 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-19 22:05:59.968578 | orchestrator | ...ignoring 2025-05-19 22:05:59.968589 | orchestrator | 2025-05-19 22:05:59.968605 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-19 22:05:59.968617 | orchestrator | Monday 19 May 2025 22:03:37 +0000 (0:00:10.871) 0:00:42.624 ************ 2025-05-19 22:05:59.968628 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.968639 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:05:59.968650 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:05:59.968661 | orchestrator | 2025-05-19 22:05:59.968671 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-19 22:05:59.968683 | orchestrator | Monday 19 May 2025 22:03:38 +0000 (0:00:00.642) 0:00:43.266 ************ 2025-05-19 22:05:59.968694 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.968704 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.968826 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.968837 | orchestrator | 2025-05-19 22:05:59.968848 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-19 22:05:59.968859 | orchestrator | Monday 19 May 2025 22:03:39 +0000 (0:00:00.394) 0:00:43.661 ************ 2025-05-19 22:05:59.968870 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.968881 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.968892 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.968903 | orchestrator | 2025-05-19 22:05:59.968914 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-19 22:05:59.968925 | orchestrator | Monday 19 May 2025 22:03:39 +0000 (0:00:00.380) 0:00:44.041 ************ 2025-05-19 22:05:59.968935 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.968946 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.968957 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.968968 | orchestrator | 2025-05-19 22:05:59.968979 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-19 22:05:59.968990 | orchestrator | Monday 19 May 2025 22:03:39 +0000 (0:00:00.396) 0:00:44.438 ************ 2025-05-19 22:05:59.969001 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.969011 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:05:59.969022 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:05:59.969033 | orchestrator | 2025-05-19 22:05:59.969122 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-19 22:05:59.969139 | orchestrator | Monday 19 May 2025 22:03:40 +0000 (0:00:00.625) 0:00:45.063 ************ 2025-05-19 22:05:59.969150 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.969161 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.969172 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.969183 | orchestrator | 2025-05-19 22:05:59.969194 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 22:05:59.969205 | orchestrator | Monday 19 May 2025 22:03:40 +0000 (0:00:00.432) 0:00:45.495 ************ 2025-05-19 22:05:59.969216 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.969227 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.969238 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-19 22:05:59.969249 | orchestrator | 2025-05-19 
22:05:59.969260 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-19 22:05:59.969271 | orchestrator | Monday 19 May 2025 22:03:41 +0000 (0:00:00.361) 0:00:45.857 ************ 2025-05-19 22:05:59.969282 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.969293 | orchestrator | 2025-05-19 22:05:59.969311 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-19 22:05:59.969322 | orchestrator | Monday 19 May 2025 22:03:51 +0000 (0:00:10.122) 0:00:55.980 ************ 2025-05-19 22:05:59.969434 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.969452 | orchestrator | 2025-05-19 22:05:59.969463 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 22:05:59.969474 | orchestrator | Monday 19 May 2025 22:03:51 +0000 (0:00:00.125) 0:00:56.106 ************ 2025-05-19 22:05:59.969485 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.969496 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.969506 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.969517 | orchestrator | 2025-05-19 22:05:59.969528 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-19 22:05:59.969539 | orchestrator | Monday 19 May 2025 22:03:52 +0000 (0:00:00.937) 0:00:57.043 ************ 2025-05-19 22:05:59.969550 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.969560 | orchestrator | 2025-05-19 22:05:59.969571 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-19 22:05:59.969582 | orchestrator | Monday 19 May 2025 22:03:59 +0000 (0:00:07.327) 0:01:04.371 ************ 2025-05-19 22:05:59.969593 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.969604 | orchestrator | 2025-05-19 22:05:59.969614 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-05-19 22:05:59.969625 | orchestrator | Monday 19 May 2025 22:04:01 +0000 (0:00:01.617) 0:01:05.988 ************ 2025-05-19 22:05:59.969636 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.969647 | orchestrator | 2025-05-19 22:05:59.969657 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-19 22:05:59.969667 | orchestrator | Monday 19 May 2025 22:04:03 +0000 (0:00:02.414) 0:01:08.403 ************ 2025-05-19 22:05:59.969676 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.969686 | orchestrator | 2025-05-19 22:05:59.969695 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-19 22:05:59.969705 | orchestrator | Monday 19 May 2025 22:04:03 +0000 (0:00:00.126) 0:01:08.530 ************ 2025-05-19 22:05:59.969715 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.969724 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.969734 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.969743 | orchestrator | 2025-05-19 22:05:59.969754 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-19 22:05:59.969765 | orchestrator | Monday 19 May 2025 22:04:04 +0000 (0:00:00.529) 0:01:09.060 ************ 2025-05-19 22:05:59.969776 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.969787 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-19 22:05:59.969798 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:05:59.969809 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:05:59.969819 | orchestrator | 2025-05-19 22:05:59.969837 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-19 22:05:59.969848 | orchestrator | skipping: no hosts matched 2025-05-19 22:05:59.969858 | orchestrator | 2025-05-19 22:05:59.969869 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-19 22:05:59.969880 | orchestrator | 2025-05-19 22:05:59.969890 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-19 22:05:59.969901 | orchestrator | Monday 19 May 2025 22:04:04 +0000 (0:00:00.320) 0:01:09.380 ************ 2025-05-19 22:05:59.969912 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:05:59.969923 | orchestrator | 2025-05-19 22:05:59.969934 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-19 22:05:59.969945 | orchestrator | Monday 19 May 2025 22:04:22 +0000 (0:00:18.162) 0:01:27.543 ************ 2025-05-19 22:05:59.969956 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:05:59.969967 | orchestrator | 2025-05-19 22:05:59.969978 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-19 22:05:59.969997 | orchestrator | Monday 19 May 2025 22:04:43 +0000 (0:00:20.567) 0:01:48.110 ************ 2025-05-19 22:05:59.970009 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:05:59.970062 | orchestrator | 2025-05-19 22:05:59.970073 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-19 22:05:59.970085 | orchestrator | 2025-05-19 22:05:59.970096 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-19 22:05:59.970108 | orchestrator | Monday 19 May 2025 22:04:46 +0000 (0:00:02.619) 0:01:50.730 ************ 2025-05-19 22:05:59.970117 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:05:59.970127 | orchestrator | 2025-05-19 22:05:59.970137 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-19 22:05:59.970147 | orchestrator | Monday 19 May 2025 22:05:05 +0000 (0:00:19.715) 0:02:10.445 ************ 2025-05-19 22:05:59.970156 | 
orchestrator | ok: [testbed-node-2] 2025-05-19 22:05:59.970166 | orchestrator | 2025-05-19 22:05:59.970176 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-19 22:05:59.970186 | orchestrator | Monday 19 May 2025 22:05:26 +0000 (0:00:20.570) 0:02:31.016 ************ 2025-05-19 22:05:59.970204 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:05:59.970214 | orchestrator | 2025-05-19 22:05:59.970224 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-19 22:05:59.970234 | orchestrator | 2025-05-19 22:05:59.970244 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-19 22:05:59.970253 | orchestrator | Monday 19 May 2025 22:05:29 +0000 (0:00:02.628) 0:02:33.645 ************ 2025-05-19 22:05:59.970263 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.970273 | orchestrator | 2025-05-19 22:05:59.970283 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-19 22:05:59.970293 | orchestrator | Monday 19 May 2025 22:05:38 +0000 (0:00:09.545) 0:02:43.191 ************ 2025-05-19 22:05:59.970303 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.970313 | orchestrator | 2025-05-19 22:05:59.970322 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-19 22:05:59.970354 | orchestrator | Monday 19 May 2025 22:05:43 +0000 (0:00:04.554) 0:02:47.745 ************ 2025-05-19 22:05:59.970365 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.970375 | orchestrator | 2025-05-19 22:05:59.970385 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-19 22:05:59.970395 | orchestrator | 2025-05-19 22:05:59.970405 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-19 22:05:59.970414 | orchestrator | 
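The repeated "Wait for MariaDB service to sync WSREP" tasks above retry until the node reports itself synced with the Galera cluster. A minimal sketch of that readiness check, assuming the tab-separated batch output of `SHOW STATUS LIKE 'wsrep_local_state_comment'` from the mysql CLI (an assumption about the probe, not kolla-ansible's exact command):

```python
# Sketch: decide whether a Galera node is synced, given the batch-mode
# output of `SHOW STATUS LIKE 'wsrep_local_state_comment'`.
# Assumes tab-separated "Variable_name\tValue" rows (mysql CLI default
# in batch mode); this is illustrative, not the exact kolla-ansible probe.

def wsrep_synced(show_status_output: str) -> bool:
    """Return True when wsrep_local_state_comment reports 'Synced'."""
    for line in show_status_output.splitlines():
        parts = line.split("\t")
        if len(parts) == 2 and parts[0] == "wsrep_local_state_comment":
            return parts[1].strip() == "Synced"
    return False
```

States such as `Donor/Desynced` or `Joining` would keep the Ansible task retrying until its timeout.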
Monday 19 May 2025 22:05:45 +0000 (0:00:02.393) 0:02:50.139 ************ 2025-05-19 22:05:59.970424 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:05:59.970434 | orchestrator | 2025-05-19 22:05:59.970444 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-19 22:05:59.970453 | orchestrator | Monday 19 May 2025 22:05:46 +0000 (0:00:00.521) 0:02:50.661 ************ 2025-05-19 22:05:59.970463 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.970473 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.970483 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.970492 | orchestrator | 2025-05-19 22:05:59.970502 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-19 22:05:59.970512 | orchestrator | Monday 19 May 2025 22:05:48 +0000 (0:00:02.264) 0:02:52.925 ************ 2025-05-19 22:05:59.970522 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.970532 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.970541 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.970551 | orchestrator | 2025-05-19 22:05:59.970561 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-19 22:05:59.970571 | orchestrator | Monday 19 May 2025 22:05:50 +0000 (0:00:02.028) 0:02:54.953 ************ 2025-05-19 22:05:59.970581 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.970597 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.970607 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.970617 | orchestrator | 2025-05-19 22:05:59.970627 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-19 22:05:59.970637 | orchestrator | Monday 19 May 2025 22:05:52 +0000 (0:00:01.966) 0:02:56.920 ************ 2025-05-19 22:05:59.970647 | 
orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.970656 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.970666 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:05:59.970676 | orchestrator | 2025-05-19 22:05:59.970685 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-19 22:05:59.970695 | orchestrator | Monday 19 May 2025 22:05:54 +0000 (0:00:01.999) 0:02:58.919 ************ 2025-05-19 22:05:59.970705 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:05:59.970715 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:05:59.970724 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:05:59.970734 | orchestrator | 2025-05-19 22:05:59.970743 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-19 22:05:59.970753 | orchestrator | Monday 19 May 2025 22:05:57 +0000 (0:00:02.845) 0:03:01.765 ************ 2025-05-19 22:05:59.970763 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:05:59.970773 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:05:59.970782 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:05:59.970792 | orchestrator | 2025-05-19 22:05:59.970807 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:05:59.970817 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-19 22:05:59.970828 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-05-19 22:05:59.970840 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-19 22:05:59.970849 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-19 22:05:59.970859 | orchestrator | 2025-05-19 22:05:59.970869 | orchestrator | 2025-05-19 22:05:59.970879 | orchestrator | 
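The PLAY RECAP above summarizes per-host counters. A small sketch of how such a recap line can be parsed and checked for health (the regex over `key=value` pairs is an assumption about spacing, not Ansible's own parser):

```python
import re

# Sketch: parse one PLAY RECAP line, e.g.
#   "testbed-node-0 : ok=34 changed=16 unreachable=0 failed=0 skipped=11 rescued=0 ignored=1"
# into (host, counters). Illustrative only.

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    host, _, rest = line.partition(":")
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

def run_unhealthy(counters: dict[str, int]) -> bool:
    # A run is unhealthy if any task failed or a host was unreachable.
    return counters.get("failed", 0) > 0 or counters.get("unreachable", 0) > 0
```

For the recap above, all hosts report `failed=0 unreachable=0`, so the play is considered healthy despite the skipped and ignored tasks.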
TASKS RECAP ******************************************************************** 2025-05-19 22:05:59.970889 | orchestrator | Monday 19 May 2025 22:05:57 +0000 (0:00:00.204) 0:03:01.969 ************ 2025-05-19 22:05:59.970899 | orchestrator | =============================================================================== 2025-05-19 22:05:59.970909 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.14s 2025-05-19 22:05:59.970919 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.88s 2025-05-19 22:05:59.970928 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.87s 2025-05-19 22:05:59.970938 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.12s 2025-05-19 22:05:59.970948 | orchestrator | mariadb : Restart MariaDB container ------------------------------------- 9.55s 2025-05-19 22:05:59.970964 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.33s 2025-05-19 22:05:59.970974 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.25s 2025-05-19 22:05:59.970984 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.55s 2025-05-19 22:05:59.970993 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.88s 2025-05-19 22:05:59.971003 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.65s 2025-05-19 22:05:59.971012 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.18s 2025-05-19 22:05:59.971022 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.89s 2025-05-19 22:05:59.971041 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.87s 2025-05-19 22:05:59.971051 | orchestrator | mariadb : Wait for 
MariaDB service to be ready through VIP -------------- 2.85s 2025-05-19 22:05:59.971060 | orchestrator | Check MariaDB service --------------------------------------------------- 2.83s 2025-05-19 22:05:59.971070 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.69s 2025-05-19 22:05:59.971080 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.41s 2025-05-19 22:05:59.971090 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.39s 2025-05-19 22:05:59.971099 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.26s 2025-05-19 22:05:59.971109 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.15s 2025-05-19 22:05:59.971119 | orchestrator | 2025-05-19 22:05:59 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:05:59.971129 | orchestrator | 2025-05-19 22:05:59 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:05:59.971139 | orchestrator | 2025-05-19 22:05:59 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:05:59.971149 | orchestrator | 2025-05-19 22:05:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:03.017941 | orchestrator | 2025-05-19 22:06:03 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:03.020185 | orchestrator | 2025-05-19 22:06:03 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:03.021464 | orchestrator | 2025-05-19 22:06:03 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:03.021689 | orchestrator | 2025-05-19 22:06:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:06.075131 | orchestrator | 2025-05-19 22:06:06 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 
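The TASKS RECAP above ranks tasks by elapsed seconds, slowest first. The callback that prints it does essentially this (the function below is a sketch over sample durations taken from the log, not the callback's actual code):

```python
# Sketch: rank task durations the way the TASKS RECAP does, keeping
# only the slowest N entries. Data shape is illustrative.

def slowest_tasks(durations: dict[str, float], n: int = 10) -> list[tuple[str, float]]:
    """Return the n slowest (task name, seconds) pairs, slowest first."""
    return sorted(durations.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Feeding in the top figures from the recap (41.14s port liveness, 37.88s container restart) reproduces the ordering shown above.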
22:06:06.076047 | orchestrator | 2025-05-19 22:06:06 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:06.077774 | orchestrator | 2025-05-19 22:06:06 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:06.077822 | orchestrator | 2025-05-19 22:06:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:09.122520 | orchestrator | 2025-05-19 22:06:09 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:09.122658 | orchestrator | 2025-05-19 22:06:09 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:09.123228 | orchestrator | 2025-05-19 22:06:09 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:09.123253 | orchestrator | 2025-05-19 22:06:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:12.170984 | orchestrator | 2025-05-19 22:06:12 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:12.171089 | orchestrator | 2025-05-19 22:06:12 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:12.172191 | orchestrator | 2025-05-19 22:06:12 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:12.172214 | orchestrator | 2025-05-19 22:06:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:15.212748 | orchestrator | 2025-05-19 22:06:15 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:15.213039 | orchestrator | 2025-05-19 22:06:15 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:15.213792 | orchestrator | 2025-05-19 22:06:15 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:15.213864 | orchestrator | 2025-05-19 22:06:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:18.268807 | orchestrator | 2025-05-19 22:06:18 | 
INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:18.269454 | orchestrator | 2025-05-19 22:06:18 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:18.271501 | orchestrator | 2025-05-19 22:06:18 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:18.271631 | orchestrator | 2025-05-19 22:06:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:21.334835 | orchestrator | 2025-05-19 22:06:21 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:21.337492 | orchestrator | 2025-05-19 22:06:21 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:21.339732 | orchestrator | 2025-05-19 22:06:21 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:21.339758 | orchestrator | 2025-05-19 22:06:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:24.392593 | orchestrator | 2025-05-19 22:06:24 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:24.392969 | orchestrator | 2025-05-19 22:06:24 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:24.394199 | orchestrator | 2025-05-19 22:06:24 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:24.394239 | orchestrator | 2025-05-19 22:06:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:27.454401 | orchestrator | 2025-05-19 22:06:27 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:27.456140 | orchestrator | 2025-05-19 22:06:27 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:27.458606 | orchestrator | 2025-05-19 22:06:27 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:27.459328 | orchestrator | 2025-05-19 22:06:27 | INFO  | Wait 1 second(s) until 
the next check 2025-05-19 22:06:30.521016 | orchestrator | 2025-05-19 22:06:30 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:30.533030 | orchestrator | 2025-05-19 22:06:30 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:30.534982 | orchestrator | 2025-05-19 22:06:30 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:30.535017 | orchestrator | 2025-05-19 22:06:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:33.587360 | orchestrator | 2025-05-19 22:06:33 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:33.588127 | orchestrator | 2025-05-19 22:06:33 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:33.590637 | orchestrator | 2025-05-19 22:06:33 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:33.590659 | orchestrator | 2025-05-19 22:06:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:36.638271 | orchestrator | 2025-05-19 22:06:36 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:36.639834 | orchestrator | 2025-05-19 22:06:36 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:36.641458 | orchestrator | 2025-05-19 22:06:36 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:36.643647 | orchestrator | 2025-05-19 22:06:36 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:39.697573 | orchestrator | 2025-05-19 22:06:39 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:39.698515 | orchestrator | 2025-05-19 22:06:39 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:39.699811 | orchestrator | 2025-05-19 22:06:39 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 
22:06:39.699823 | orchestrator | 2025-05-19 22:06:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:42.752795 | orchestrator | 2025-05-19 22:06:42 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:42.753414 | orchestrator | 2025-05-19 22:06:42 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:42.753450 | orchestrator | 2025-05-19 22:06:42 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:42.753785 | orchestrator | 2025-05-19 22:06:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:45.817685 | orchestrator | 2025-05-19 22:06:45 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:45.820297 | orchestrator | 2025-05-19 22:06:45 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:45.821485 | orchestrator | 2025-05-19 22:06:45 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:45.821530 | orchestrator | 2025-05-19 22:06:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:48.879574 | orchestrator | 2025-05-19 22:06:48 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:48.879685 | orchestrator | 2025-05-19 22:06:48 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:48.881237 | orchestrator | 2025-05-19 22:06:48 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:48.881315 | orchestrator | 2025-05-19 22:06:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:51.939931 | orchestrator | 2025-05-19 22:06:51 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:51.945378 | orchestrator | 2025-05-19 22:06:51 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:51.950654 | orchestrator | 2025-05-19 22:06:51 | 
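The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" entries come from a poll-until-done loop over the three task IDs. A minimal sketch of that loop, where `fetch_states` is a hypothetical callable returning task-id-to-state mappings (the real OSISM manager loop also sleeps between checks, omitted here so the sketch stays testable):

```python
# Sketch of the polling loop behind the "Wait 1 second(s) until the next
# check" messages above. fetch_states() is a hypothetical stand-in for
# asking the task queue for current states.

def wait_for_tasks(fetch_states, max_checks: int = 1000) -> dict[str, str]:
    """Poll until every task reaches a terminal state, then return the states."""
    for _ in range(max_checks):
        states = fetch_states()
        if all(s in ("SUCCESS", "FAILURE") for s in states.values()):
            return states
        # the real loop logs each state and sleeps ~1 second here
    raise TimeoutError("tasks did not finish within max_checks polls")
```

In the log above, task 759ad17f… flips to SUCCESS at 22:06:58 while the other two remain STARTED, so the loop keeps polling.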
INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:51.950723 | orchestrator | 2025-05-19 22:06:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:55.020606 | orchestrator | 2025-05-19 22:06:55 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:55.021601 | orchestrator | 2025-05-19 22:06:55 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state STARTED 2025-05-19 22:06:55.024634 | orchestrator | 2025-05-19 22:06:55 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:55.024713 | orchestrator | 2025-05-19 22:06:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:06:58.083479 | orchestrator | 2025-05-19 22:06:58 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:06:58.087244 | orchestrator | 2025-05-19 22:06:58 | INFO  | Task 759ad17f-6bf9-4379-b889-f2b758dd4c86 is in state SUCCESS 2025-05-19 22:06:58.087287 | orchestrator | 2025-05-19 22:06:58.089275 | orchestrator | 2025-05-19 22:06:58.089315 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-19 22:06:58.089328 | orchestrator | 2025-05-19 22:06:58.089340 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-05-19 22:06:58.089376 | orchestrator | Monday 19 May 2025 22:04:49 +0000 (0:00:00.639) 0:00:00.639 ************ 2025-05-19 22:06:58.089388 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:06:58.089400 | orchestrator | 2025-05-19 22:06:58.089411 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-05-19 22:06:58.089422 | orchestrator | Monday 19 May 2025 22:04:49 +0000 (0:00:00.610) 0:00:01.250 ************ 2025-05-19 22:06:58.089433 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.089445 | 
orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.089456 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.089467 | orchestrator | 2025-05-19 22:06:58.089478 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-05-19 22:06:58.089489 | orchestrator | Monday 19 May 2025 22:04:50 +0000 (0:00:00.610) 0:00:01.861 ************ 2025-05-19 22:06:58.089514 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.089600 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.089658 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.089673 | orchestrator | 2025-05-19 22:06:58.089684 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-05-19 22:06:58.089695 | orchestrator | Monday 19 May 2025 22:04:50 +0000 (0:00:00.276) 0:00:02.138 ************ 2025-05-19 22:06:58.089706 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.089717 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.089728 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.089738 | orchestrator | 2025-05-19 22:06:58.089749 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-05-19 22:06:58.089760 | orchestrator | Monday 19 May 2025 22:04:51 +0000 (0:00:00.857) 0:00:02.995 ************ 2025-05-19 22:06:58.089771 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.089782 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.089793 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.089803 | orchestrator | 2025-05-19 22:06:58.089814 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-05-19 22:06:58.089825 | orchestrator | Monday 19 May 2025 22:04:51 +0000 (0:00:00.273) 0:00:03.269 ************ 2025-05-19 22:06:58.089836 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.089848 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.089861 | 
orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.089874 | orchestrator | 2025-05-19 22:06:58.089933 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-05-19 22:06:58.090736 | orchestrator | Monday 19 May 2025 22:04:52 +0000 (0:00:00.291) 0:00:03.561 ************ 2025-05-19 22:06:58.090769 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.090781 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.090792 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.090803 | orchestrator | 2025-05-19 22:06:58.090814 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-05-19 22:06:58.090825 | orchestrator | Monday 19 May 2025 22:04:52 +0000 (0:00:00.287) 0:00:03.848 ************ 2025-05-19 22:06:58.090836 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.090848 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.090859 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.090870 | orchestrator | 2025-05-19 22:06:58.090880 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-05-19 22:06:58.090891 | orchestrator | Monday 19 May 2025 22:04:52 +0000 (0:00:00.462) 0:00:04.311 ************ 2025-05-19 22:06:58.090902 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.090913 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.090924 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.090934 | orchestrator | 2025-05-19 22:06:58.090945 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-19 22:06:58.090956 | orchestrator | Monday 19 May 2025 22:04:53 +0000 (0:00:00.289) 0:00:04.600 ************ 2025-05-19 22:06:58.090967 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 22:06:58.091031 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 22:06:58.091051 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 22:06:58.091070 | orchestrator | 2025-05-19 22:06:58.091089 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-05-19 22:06:58.091108 | orchestrator | Monday 19 May 2025 22:04:53 +0000 (0:00:00.624) 0:00:05.225 ************ 2025-05-19 22:06:58.091127 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.091145 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.091164 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.091183 | orchestrator | 2025-05-19 22:06:58.091238 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-05-19 22:06:58.091252 | orchestrator | Monday 19 May 2025 22:04:54 +0000 (0:00:00.413) 0:00:05.639 ************ 2025-05-19 22:06:58.091263 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 22:06:58.091274 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 22:06:58.091285 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 22:06:58.091295 | orchestrator | 2025-05-19 22:06:58.091306 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-05-19 22:06:58.091317 | orchestrator | Monday 19 May 2025 22:04:56 +0000 (0:00:02.087) 0:00:07.726 ************ 2025-05-19 22:06:58.091328 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 22:06:58.091342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 22:06:58.091355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-19 22:06:58.091367 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.091380 | 
orchestrator | 2025-05-19 22:06:58.091393 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-05-19 22:06:58.091467 | orchestrator | Monday 19 May 2025 22:04:56 +0000 (0:00:00.395) 0:00:08.122 ************ 2025-05-19 22:06:58.091485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.091502 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.091515 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.091538 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.091551 | orchestrator | 2025-05-19 22:06:58.091564 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-05-19 22:06:58.091577 | orchestrator | Monday 19 May 2025 22:04:57 +0000 (0:00:00.918) 0:00:09.041 ************ 2025-05-19 22:06:58.091592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.091609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.091633 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.091645 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.091656 | orchestrator | 2025-05-19 22:06:58.091667 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-05-19 22:06:58.091677 | orchestrator | Monday 19 May 2025 22:04:57 +0000 (0:00:00.139) 0:00:09.180 ************ 2025-05-19 22:06:58.091690 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '374c13e40775', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-19 22:04:54.962481', 'end': '2025-05-19 22:04:55.013257', 'delta': '0:00:00.050776', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['374c13e40775'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-19 22:06:58.091705 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e340ae1a2d46', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-19 22:04:55.660385', 'end': '2025-05-19 22:04:55.695003', 'delta': '0:00:00.034618', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e340ae1a2d46'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-19 22:06:58.091752 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd3ddfae0a39c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-19 22:04:56.213102', 'end': '2025-05-19 22:04:56.252077', 'delta': '0:00:00.038975', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d3ddfae0a39c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-19 22:06:58.091766 | orchestrator | 2025-05-19 22:06:58.091778 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-05-19 22:06:58.091789 | orchestrator | Monday 19 May 2025 22:04:58 +0000 (0:00:00.343) 0:00:09.524 ************ 2025-05-19 22:06:58.091805 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.091818 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.091836 | 
orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.091855 | orchestrator | 2025-05-19 22:06:58.091873 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-05-19 22:06:58.091892 | orchestrator | Monday 19 May 2025 22:04:58 +0000 (0:00:00.415) 0:00:09.940 ************ 2025-05-19 22:06:58.091911 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-19 22:06:58.091931 | orchestrator | 2025-05-19 22:06:58.091948 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-05-19 22:06:58.091969 | orchestrator | Monday 19 May 2025 22:05:00 +0000 (0:00:01.854) 0:00:11.794 ************ 2025-05-19 22:06:58.091981 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.091992 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.092002 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.092013 | orchestrator | 2025-05-19 22:06:58.092024 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-05-19 22:06:58.092035 | orchestrator | Monday 19 May 2025 22:05:00 +0000 (0:00:00.299) 0:00:12.094 ************ 2025-05-19 22:06:58.092045 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.092056 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.092067 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.092078 | orchestrator | 2025-05-19 22:06:58.092089 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-19 22:06:58.092100 | orchestrator | Monday 19 May 2025 22:05:01 +0000 (0:00:00.417) 0:00:12.511 ************ 2025-05-19 22:06:58.092111 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.092122 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.092132 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.092143 | orchestrator | 2025-05-19 22:06:58.092154 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-05-19 22:06:58.092165 | orchestrator | Monday 19 May 2025 22:05:01 +0000 (0:00:00.483) 0:00:12.995 ************ 2025-05-19 22:06:58.092176 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.092186 | orchestrator | 2025-05-19 22:06:58.092452 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-05-19 22:06:58.092466 | orchestrator | Monday 19 May 2025 22:05:01 +0000 (0:00:00.124) 0:00:13.119 ************ 2025-05-19 22:06:58.092477 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.092563 | orchestrator | 2025-05-19 22:06:58.092577 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-19 22:06:58.092588 | orchestrator | Monday 19 May 2025 22:05:01 +0000 (0:00:00.223) 0:00:13.342 ************ 2025-05-19 22:06:58.092599 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.092610 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.092621 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.092632 | orchestrator | 2025-05-19 22:06:58.092643 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-05-19 22:06:58.092654 | orchestrator | Monday 19 May 2025 22:05:02 +0000 (0:00:00.289) 0:00:13.631 ************ 2025-05-19 22:06:58.092665 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.092676 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.092686 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.092697 | orchestrator | 2025-05-19 22:06:58.092708 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-05-19 22:06:58.092719 | orchestrator | Monday 19 May 2025 22:05:02 +0000 (0:00:00.324) 0:00:13.956 ************ 2025-05-19 22:06:58.092730 | orchestrator | skipping: [testbed-node-3] 
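The fsid task chain above (`Get current fsid if cluster is already running`, `Set_fact fsid from current_fsid`, `Generate cluster fsid`) reuses the fsid reported by a running monitor and only generates a fresh one when no cluster answers; here the delegated query to testbed-node-2 succeeded, so the generation task is skipped. A hedged sketch of that decision, with a hypothetical `resolve_fsid` helper standing in for the task chain:

```python
import uuid

# Hypothetical condensation of the ceph-facts fsid tasks: keep the
# fsid returned by a running mon when the query succeeded, otherwise
# fall back to generating a new cluster fsid.
def resolve_fsid(current_fsid_stdout, rc):
    if rc == 0 and current_fsid_stdout.strip():
        return current_fsid_stdout.strip()
    return str(uuid.uuid4())
```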
2025-05-19 22:06:58.092741 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.092751 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.092760 | orchestrator | 2025-05-19 22:06:58.092770 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-05-19 22:06:58.092780 | orchestrator | Monday 19 May 2025 22:05:03 +0000 (0:00:00.523) 0:00:14.480 ************ 2025-05-19 22:06:58.092789 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.092799 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.092809 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.092818 | orchestrator | 2025-05-19 22:06:58.092828 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-05-19 22:06:58.092838 | orchestrator | Monday 19 May 2025 22:05:03 +0000 (0:00:00.321) 0:00:14.801 ************ 2025-05-19 22:06:58.092847 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.092857 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.092878 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.092888 | orchestrator | 2025-05-19 22:06:58.092898 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-05-19 22:06:58.092907 | orchestrator | Monday 19 May 2025 22:05:03 +0000 (0:00:00.354) 0:00:15.156 ************ 2025-05-19 22:06:58.092917 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.092927 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.092936 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.092946 | orchestrator | 2025-05-19 22:06:58.092956 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-19 22:06:58.093001 | orchestrator | Monday 19 May 2025 22:05:04 +0000 (0:00:00.339) 0:00:15.496 ************ 2025-05-19 22:06:58.093012 | orchestrator | skipping: [testbed-node-3] 
2025-05-19 22:06:58.093022 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.093031 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.093041 | orchestrator | 2025-05-19 22:06:58.093051 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-05-19 22:06:58.093060 | orchestrator | Monday 19 May 2025 22:05:04 +0000 (0:00:00.627) 0:00:16.124 ************ 2025-05-19 22:06:58.093079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52cfe21f--2cf0--5660--8f5b--0412bede7d5f-osd--block--52cfe21f--2cf0--5660--8f5b--0412bede7d5f', 'dm-uuid-LVM-25Ux91xuT7WiMrBFdwOi1pMwenBIWeCBeiRM36oZY1JX4ZJkb0b2c1NOPE20V9v0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9-osd--block--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9', 'dm-uuid-LVM-8aMxDAC69wHx71dcpG20Q31tCBflhRmBrlxbrompEEIQX7YSUfTqlUZ2yqTkpcnk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-05-19 22:06:58.093260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093335 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--52cfe21f--2cf0--5660--8f5b--0412bede7d5f-osd--block--52cfe21f--2cf0--5660--8f5b--0412bede7d5f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-l7ssem-IdnE-BWTE-0Yd7-3cX8-jALR-GFmCDt', 'scsi-0QEMU_QEMU_HARDDISK_65b1a457-74f9-440b-9c0b-913fdfb04314', 'scsi-SQEMU_QEMU_HARDDISK_65b1a457-74f9-440b-9c0b-913fdfb04314'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9-osd--block--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hcS6Z0-3dbh-FsUe-6Hl6-0vR4-DzK8-1zTlE6', 'scsi-0QEMU_QEMU_HARDDISK_cd626c85-4d79-4ec3-873e-c38f80c6408d', 'scsi-SQEMU_QEMU_HARDDISK_cd626c85-4d79-4ec3-873e-c38f80c6408d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5aea9423-7155-4edc-a2c1-cc12eb50d261', 'scsi-SQEMU_QEMU_HARDDISK_5aea9423-7155-4edc-a2c1-cc12eb50d261'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2161015--9b2d--55ef--85cd--b20f941db83a-osd--block--d2161015--9b2d--55ef--85cd--b20f941db83a', 'dm-uuid-LVM-CW4c3NGDdo1fwdkbiKJIdjjJJdnMVj1UxTnxsVSsTxcZWGST2UJuuMus20xQFxB6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73ec3cc1--218e--51bb--a362--2e871742ea52-osd--block--73ec3cc1--218e--51bb--a362--2e871742ea52', 
'dm-uuid-LVM-yGlbKPYLW6DemIsqRYBfWpD8tvVVslaOYqa3UfTOaNStRqSocsB08xBr6Ha7N511'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093563 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.093581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part1', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part14', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part15', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part16', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d2161015--9b2d--55ef--85cd--b20f941db83a-osd--block--d2161015--9b2d--55ef--85cd--b20f941db83a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JdbZd6-MD3W-nwco-pvWy-uPaG-COz4-ILzeqO', 'scsi-0QEMU_QEMU_HARDDISK_53ed34a9-290d-4031-aa3e-f95b5c6d33b8', 'scsi-SQEMU_QEMU_HARDDISK_53ed34a9-290d-4031-aa3e-f95b5c6d33b8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--73ec3cc1--218e--51bb--a362--2e871742ea52-osd--block--73ec3cc1--218e--51bb--a362--2e871742ea52'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-snlF3m-Oi4j-I3Sj-YZaO-WcHw-xu2a-V6LWrW', 'scsi-0QEMU_QEMU_HARDDISK_934db128-59d0-4992-8eb9-92fedfad2305', 'scsi-SQEMU_QEMU_HARDDISK_934db128-59d0-4992-8eb9-92fedfad2305'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3', 'scsi-SQEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093653 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.093663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--d6c00661--cf2a--5067--a507--d2ca4df6447b-osd--block--d6c00661--cf2a--5067--a507--d2ca4df6447b', 'dm-uuid-LVM-OqlL2uEqafAGX9iIr2ntluuztK7fkD1t3vrp0n4U7NcdnIkhJg8R2DeijZ9Lmols'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8-osd--block--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8', 'dm-uuid-LVM-LoMld4gp88uLIixd8sShMrFgxLTqn5lNXnvbLdxHRCJfDqyPk00c82b0G8aSRO5u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093762 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-19 22:06:58.093794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part1', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part14', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part15', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part16', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093806 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d6c00661--cf2a--5067--a507--d2ca4df6447b-osd--block--d6c00661--cf2a--5067--a507--d2ca4df6447b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jGuMqO-44Uk-XNOS-pHB5-fCAH-wQzA-HW6kvE', 'scsi-0QEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70', 'scsi-SQEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8-osd--block--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dsUu3E-N4Ci-8cAW-iChd-BvQ3-heId-OUpQXI', 'scsi-0QEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6', 'scsi-SQEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba', 'scsi-SQEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-19 22:06:58.093860 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.093870 | orchestrator | 2025-05-19 22:06:58.093880 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-05-19 22:06:58.093890 | orchestrator | Monday 19 May 2025 22:05:05 +0000 (0:00:00.563) 0:00:16.688 ************ 2025-05-19 22:06:58.093905 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--52cfe21f--2cf0--5660--8f5b--0412bede7d5f-osd--block--52cfe21f--2cf0--5660--8f5b--0412bede7d5f', 'dm-uuid-LVM-25Ux91xuT7WiMrBFdwOi1pMwenBIWeCBeiRM36oZY1JX4ZJkb0b2c1NOPE20V9v0'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.093916 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9-osd--block--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9', 'dm-uuid-LVM-8aMxDAC69wHx71dcpG20Q31tCBflhRmBrlxbrompEEIQX7YSUfTqlUZ2yqTkpcnk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.093932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.093943 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.093953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.093969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.093984 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.093994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094011 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094057 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part1', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part14', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part15', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part16', 'scsi-SQEMU_QEMU_HARDDISK_f8343413-00d0-459f-8d8a-4508348eb38f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094094 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d2161015--9b2d--55ef--85cd--b20f941db83a-osd--block--d2161015--9b2d--55ef--85cd--b20f941db83a', 'dm-uuid-LVM-CW4c3NGDdo1fwdkbiKJIdjjJJdnMVj1UxTnxsVSsTxcZWGST2UJuuMus20xQFxB6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094112 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--52cfe21f--2cf0--5660--8f5b--0412bede7d5f-osd--block--52cfe21f--2cf0--5660--8f5b--0412bede7d5f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-l7ssem-IdnE-BWTE-0Yd7-3cX8-jALR-GFmCDt', 'scsi-0QEMU_QEMU_HARDDISK_65b1a457-74f9-440b-9c0b-913fdfb04314', 'scsi-SQEMU_QEMU_HARDDISK_65b1a457-74f9-440b-9c0b-913fdfb04314'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094123 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--73ec3cc1--218e--51bb--a362--2e871742ea52-osd--block--73ec3cc1--218e--51bb--a362--2e871742ea52', 'dm-uuid-LVM-yGlbKPYLW6DemIsqRYBfWpD8tvVVslaOYqa3UfTOaNStRqSocsB08xBr6Ha7N511'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9-osd--block--8ad6e576--16ee--5df9--adc2--5fd1c09e2bb9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hcS6Z0-3dbh-FsUe-6Hl6-0vR4-DzK8-1zTlE6', 'scsi-0QEMU_QEMU_HARDDISK_cd626c85-4d79-4ec3-873e-c38f80c6408d', 'scsi-SQEMU_QEMU_HARDDISK_cd626c85-4d79-4ec3-873e-c38f80c6408d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5aea9423-7155-4edc-a2c1-cc12eb50d261', 'scsi-SQEMU_QEMU_HARDDISK_5aea9423-7155-4edc-a2c1-cc12eb50d261'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094166 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094185 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-15-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094215 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094225 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.094236 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094251 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094267 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094277 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094293 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094303 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094313 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d6c00661--cf2a--5067--a507--d2ca4df6447b-osd--block--d6c00661--cf2a--5067--a507--d2ca4df6447b', 'dm-uuid-LVM-OqlL2uEqafAGX9iIr2ntluuztK7fkD1t3vrp0n4U7NcdnIkhJg8R2DeijZ9Lmols'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8-osd--block--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8', 'dm-uuid-LVM-LoMld4gp88uLIixd8sShMrFgxLTqn5lNXnvbLdxHRCJfDqyPk00c82b0G8aSRO5u'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094347 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part1', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part14', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part15', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part16', 'scsi-SQEMU_QEMU_HARDDISK_43b96880-e893-431a-9e82-fe3cb3c87177-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
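Every `skipping:` entry in this task reports the same `false_condition`: `osd_auto_discovery | default(False) | bool`. Since the testbed nodes do not set `osd_auto_discovery`, the `default(False)` filter supplies `False`, the `bool` cast keeps it falsy, and ceph-facts skips device auto-discovery for every disk in the loop. As a minimal sketch (hypothetical helper names, approximating — not reproducing — Ansible's Jinja2 filter chain):

```python
def to_bool(value):
    # Approximates Ansible's `| bool` filter: common truthy strings
    # and native truthiness map to True, everything else to False.
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        return value.strip().lower() in ("1", "true", "yes", "on")
    return bool(value)

def should_run(task_vars):
    # Evaluates 'osd_auto_discovery | default(False) | bool' against host vars:
    # missing variable -> default(False) -> bool -> False -> task is skipped.
    return to_bool(task_vars.get("osd_auto_discovery", False))

# With osd_auto_discovery unset (as on testbed-node-3/4/5 above),
# every loop item — dm-*, loop*, sd*, sr0 — is skipped.
devices = ["dm-0", "dm-1", "loop0", "sda", "sdb", "sdc", "sdd", "sr0"]
skipped = [d for d in devices if not should_run({})]
```

This is why the log shows one skip line per block device per host rather than a single skip for the task: the `when:` condition is re-evaluated for each `loop` item, and each evaluation is reported individually.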
2025-05-19 22:06:58.094363 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094374 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d2161015--9b2d--55ef--85cd--b20f941db83a-osd--block--d2161015--9b2d--55ef--85cd--b20f941db83a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-JdbZd6-MD3W-nwco-pvWy-uPaG-COz4-ILzeqO', 'scsi-0QEMU_QEMU_HARDDISK_53ed34a9-290d-4031-aa3e-f95b5c6d33b8', 'scsi-SQEMU_QEMU_HARDDISK_53ed34a9-290d-4031-aa3e-f95b5c6d33b8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094391 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094406 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--73ec3cc1--218e--51bb--a362--2e871742ea52-osd--block--73ec3cc1--218e--51bb--a362--2e871742ea52'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-snlF3m-Oi4j-I3Sj-YZaO-WcHw-xu2a-V6LWrW', 'scsi-0QEMU_QEMU_HARDDISK_934db128-59d0-4992-8eb9-92fedfad2305', 'scsi-SQEMU_QEMU_HARDDISK_934db128-59d0-4992-8eb9-92fedfad2305'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094423 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094434 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3', 'scsi-SQEMU_QEMU_HARDDISK_d1012b89-dbd1-43a9-85f9-d367e08581b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094444 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-21-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094474 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094490 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.094500 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094511 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094521 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094542 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part1', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part14', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part15', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part16', 'scsi-SQEMU_QEMU_HARDDISK_377dee36-64db-427b-88c3-b195b97ec397-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094560 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d6c00661--cf2a--5067--a507--d2ca4df6447b-osd--block--d6c00661--cf2a--5067--a507--d2ca4df6447b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jGuMqO-44Uk-XNOS-pHB5-fCAH-wQzA-HW6kvE', 'scsi-0QEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70', 'scsi-SQEMU_QEMU_HARDDISK_fb54ccde-5cdf-4bdf-8e5b-bd2626265c70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094570 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8-osd--block--cfdd3ed5--b98d--51b3--b2a5--29887bcc1fa8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dsUu3E-N4Ci-8cAW-iChd-BvQ3-heId-OUpQXI', 'scsi-0QEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6', 'scsi-SQEMU_QEMU_HARDDISK_497cbfa2-65b5-4f15-af98-7aa46abcc2e6'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094580 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba', 'scsi-SQEMU_QEMU_HARDDISK_1c1b0e05-b224-4a51-87f1-7edfa2f843ba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094597 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-19-21-11-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-19 22:06:58.094608 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.094624 | orchestrator | 2025-05-19 22:06:58.094634 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-05-19 22:06:58.094644 | orchestrator | Monday 19 May 2025 22:05:05 +0000 (0:00:00.584) 0:00:17.273 ************ 2025-05-19 22:06:58.094654 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.094664 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.094673 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.094683 | orchestrator | 2025-05-19 22:06:58.094693 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2025-05-19 22:06:58.094702 | orchestrator | Monday 19 May 2025 22:05:06 +0000 (0:00:00.699) 0:00:17.973 ************ 2025-05-19 22:06:58.094712 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.094726 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.094736 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.094746 | orchestrator | 2025-05-19 22:06:58.094756 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-19 22:06:58.094765 | orchestrator | Monday 19 May 2025 22:05:07 +0000 (0:00:00.451) 0:00:18.424 ************ 2025-05-19 22:06:58.094775 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.094784 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.094794 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.094803 | orchestrator | 2025-05-19 22:06:58.094813 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-05-19 22:06:58.094823 | orchestrator | Monday 19 May 2025 22:05:07 +0000 (0:00:00.637) 0:00:19.061 ************ 2025-05-19 22:06:58.094832 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.094842 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.094852 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.094861 | orchestrator | 2025-05-19 22:06:58.094871 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-05-19 22:06:58.094880 | orchestrator | Monday 19 May 2025 22:05:08 +0000 (0:00:00.317) 0:00:19.379 ************ 2025-05-19 22:06:58.094890 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.094899 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.094909 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.094919 | orchestrator | 2025-05-19 22:06:58.094928 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2025-05-19 22:06:58.094938 | orchestrator | Monday 19 May 2025 22:05:08 +0000 (0:00:00.420) 0:00:19.799 ************ 2025-05-19 22:06:58.094948 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.094957 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.094966 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.094976 | orchestrator | 2025-05-19 22:06:58.094986 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-05-19 22:06:58.094995 | orchestrator | Monday 19 May 2025 22:05:09 +0000 (0:00:00.570) 0:00:20.369 ************ 2025-05-19 22:06:58.095005 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-19 22:06:58.095015 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-19 22:06:58.095025 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-19 22:06:58.095034 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-19 22:06:58.095044 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-19 22:06:58.095053 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-19 22:06:58.095063 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-19 22:06:58.095072 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-19 22:06:58.095082 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-19 22:06:58.095091 | orchestrator | 2025-05-19 22:06:58.095101 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-05-19 22:06:58.095111 | orchestrator | Monday 19 May 2025 22:05:09 +0000 (0:00:00.848) 0:00:21.217 ************ 2025-05-19 22:06:58.095120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-19 22:06:58.095130 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-19 22:06:58.095146 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2025-05-19 22:06:58.095156 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.095166 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-19 22:06:58.095175 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-19 22:06:58.095185 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-19 22:06:58.095214 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.095224 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-19 22:06:58.095234 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-19 22:06:58.095244 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-19 22:06:58.095253 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.095263 | orchestrator | 2025-05-19 22:06:58.095273 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-05-19 22:06:58.095283 | orchestrator | Monday 19 May 2025 22:05:10 +0000 (0:00:00.383) 0:00:21.601 ************ 2025-05-19 22:06:58.095293 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:06:58.095303 | orchestrator | 2025-05-19 22:06:58.095313 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-19 22:06:58.095323 | orchestrator | Monday 19 May 2025 22:05:10 +0000 (0:00:00.702) 0:00:22.304 ************ 2025-05-19 22:06:58.095333 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.095342 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.095352 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.095362 | orchestrator | 2025-05-19 22:06:58.095377 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2025-05-19 22:06:58.095387 | orchestrator | Monday 19 May 2025 22:05:11 +0000 (0:00:00.318) 0:00:22.623 ************ 2025-05-19 22:06:58.095396 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.095406 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.095416 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.095425 | orchestrator | 2025-05-19 22:06:58.095435 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-19 22:06:58.095445 | orchestrator | Monday 19 May 2025 22:05:11 +0000 (0:00:00.304) 0:00:22.928 ************ 2025-05-19 22:06:58.095455 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.095464 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.095474 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:06:58.095484 | orchestrator | 2025-05-19 22:06:58.095493 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-05-19 22:06:58.095503 | orchestrator | Monday 19 May 2025 22:05:11 +0000 (0:00:00.325) 0:00:23.253 ************ 2025-05-19 22:06:58.095513 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.095523 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.095537 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.095547 | orchestrator | 2025-05-19 22:06:58.095556 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-05-19 22:06:58.095566 | orchestrator | Monday 19 May 2025 22:05:12 +0000 (0:00:00.558) 0:00:23.811 ************ 2025-05-19 22:06:58.095576 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 22:06:58.095586 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 22:06:58.095596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 22:06:58.095605 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.095615 | 
orchestrator | 2025-05-19 22:06:58.095625 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-19 22:06:58.095634 | orchestrator | Monday 19 May 2025 22:05:12 +0000 (0:00:00.357) 0:00:24.169 ************ 2025-05-19 22:06:58.095644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 22:06:58.095654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 22:06:58.095672 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 22:06:58.095682 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.095691 | orchestrator | 2025-05-19 22:06:58.095701 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-19 22:06:58.095711 | orchestrator | Monday 19 May 2025 22:05:13 +0000 (0:00:00.349) 0:00:24.519 ************ 2025-05-19 22:06:58.095721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-19 22:06:58.095731 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-19 22:06:58.095740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-19 22:06:58.095750 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.095759 | orchestrator | 2025-05-19 22:06:58.095769 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-05-19 22:06:58.095779 | orchestrator | Monday 19 May 2025 22:05:13 +0000 (0:00:00.354) 0:00:24.873 ************ 2025-05-19 22:06:58.095789 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:06:58.095799 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:06:58.095808 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:06:58.095818 | orchestrator | 2025-05-19 22:06:58.095828 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-05-19 22:06:58.095838 | orchestrator | Monday 19 May 2025 22:05:13 +0000 
(0:00:00.302) 0:00:25.175 ************ 2025-05-19 22:06:58.095847 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-19 22:06:58.095857 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-19 22:06:58.095867 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-19 22:06:58.095877 | orchestrator | 2025-05-19 22:06:58.095886 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-05-19 22:06:58.095896 | orchestrator | Monday 19 May 2025 22:05:14 +0000 (0:00:00.475) 0:00:25.651 ************ 2025-05-19 22:06:58.095906 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 22:06:58.095916 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 22:06:58.095926 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 22:06:58.095935 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-19 22:06:58.095945 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 22:06:58.095955 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 22:06:58.095965 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-19 22:06:58.095974 | orchestrator | 2025-05-19 22:06:58.095984 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-05-19 22:06:58.095994 | orchestrator | Monday 19 May 2025 22:05:15 +0000 (0:00:00.941) 0:00:26.593 ************ 2025-05-19 22:06:58.096003 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-19 22:06:58.096013 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-19 22:06:58.096023 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-19 22:06:58.096032 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-19 22:06:58.096042 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-19 22:06:58.096052 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-19 22:06:58.096062 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-19 22:06:58.096071 | orchestrator | 2025-05-19 22:06:58.096085 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-05-19 22:06:58.096095 | orchestrator | Monday 19 May 2025 22:05:17 +0000 (0:00:01.800) 0:00:28.394 ************ 2025-05-19 22:06:58.096105 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:06:58.096121 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:06:58.096222 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-05-19 22:06:58.096233 | orchestrator | 2025-05-19 22:06:58.096243 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-05-19 22:06:58.096253 | orchestrator | Monday 19 May 2025 22:05:17 +0000 (0:00:00.351) 0:00:28.745 ************ 2025-05-19 22:06:58.096263 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 22:06:58.096280 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2025-05-19 22:06:58.096290 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 22:06:58.096301 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 22:06:58.096311 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-19 22:06:58.096321 | orchestrator | 2025-05-19 22:06:58.096331 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-19 22:06:58.096341 | orchestrator | Monday 19 May 2025 22:06:03 +0000 (0:00:45.603) 0:01:14.348 ************ 2025-05-19 22:06:58.096351 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096361 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096370 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096380 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096390 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096400 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 
22:06:58.096409 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-19 22:06:58.096419 | orchestrator | 2025-05-19 22:06:58.096429 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-19 22:06:58.096439 | orchestrator | Monday 19 May 2025 22:06:26 +0000 (0:00:23.045) 0:01:37.394 ************ 2025-05-19 22:06:58.096448 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096458 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096468 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096477 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096487 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096497 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096507 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-19 22:06:58.096523 | orchestrator | 2025-05-19 22:06:58.096533 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-19 22:06:58.096543 | orchestrator | Monday 19 May 2025 22:06:37 +0000 (0:00:11.587) 0:01:48.981 ************ 2025-05-19 22:06:58.096553 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096563 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-19 22:06:58.096572 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-19 22:06:58.096582 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096592 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2025-05-19 22:06:58.096601 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-19 22:06:58.096617 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096627 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-19 22:06:58.096637 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-19 22:06:58.096647 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096657 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-19 22:06:58.096667 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-19 22:06:58.096676 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096686 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-19 22:06:58.096696 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-19 22:06:58.096711 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-19 22:06:58.096721 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-19 22:06:58.096731 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-19 22:06:58.096740 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-19 22:06:58.096750 | orchestrator | 2025-05-19 22:06:58.096760 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:06:58.096770 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-05-19 22:06:58.096781 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-19 22:06:58.096791 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-19 22:06:58.096801 | orchestrator | 2025-05-19 22:06:58.096811 | orchestrator | 2025-05-19 22:06:58.096820 | orchestrator | 2025-05-19 22:06:58.096830 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:06:58.096840 | orchestrator | Monday 19 May 2025 22:06:54 +0000 (0:00:17.272) 0:02:06.254 ************ 2025-05-19 22:06:58.096850 | orchestrator | =============================================================================== 2025-05-19 22:06:58.096860 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.60s 2025-05-19 22:06:58.096870 | orchestrator | generate keys ---------------------------------------------------------- 23.05s 2025-05-19 22:06:58.096879 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.27s 2025-05-19 22:06:58.096889 | orchestrator | get keys from monitors ------------------------------------------------- 11.59s 2025-05-19 22:06:58.096899 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.09s 2025-05-19 22:06:58.096908 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.85s 2025-05-19 22:06:58.096924 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.80s 2025-05-19 22:06:58.096934 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.94s 2025-05-19 22:06:58.096943 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.92s 2025-05-19 22:06:58.096953 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s 2025-05-19 
22:06:58.096963 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2025-05-19 22:06:58.096972 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s 2025-05-19 22:06:58.096982 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.70s 2025-05-19 22:06:58.096991 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2025-05-19 22:06:58.097001 | orchestrator | ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks --- 0.63s 2025-05-19 22:06:58.097011 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s 2025-05-19 22:06:58.097020 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.61s 2025-05-19 22:06:58.097030 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.61s 2025-05-19 22:06:58.097040 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s 2025-05-19 22:06:58.097049 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.57s 2025-05-19 22:06:58.097059 | orchestrator | 2025-05-19 22:06:58 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:06:58.097069 | orchestrator | 2025-05-19 22:06:58 | INFO  | Task 2cb31017-e79d-45b0-ac59-fac3d2572edf is in state STARTED 2025-05-19 22:06:58.097079 | orchestrator | 2025-05-19 22:06:58 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:07:01.148286 | orchestrator | 2025-05-19 22:07:01 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED 2025-05-19 22:07:01.149881 | orchestrator | 2025-05-19 22:07:01 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:07:01.151791 | orchestrator | 2025-05-19 22:07:01 | INFO  | Task 
2cb31017-e79d-45b0-ac59-fac3d2572edf is in state STARTED
2025-05-19 22:07:01.151834 | orchestrator | 2025-05-19 22:07:01 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:04.207082 | orchestrator | 2025-05-19 22:07:04 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:04.210241 | orchestrator | 2025-05-19 22:07:04 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:04.211262 | orchestrator | 2025-05-19 22:07:04 | INFO  | Task 2cb31017-e79d-45b0-ac59-fac3d2572edf is in state STARTED
2025-05-19 22:07:04.211299 | orchestrator | 2025-05-19 22:07:04 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:07.260545 | orchestrator | 2025-05-19 22:07:07 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:07.263092 | orchestrator | 2025-05-19 22:07:07 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:07.265116 | orchestrator | 2025-05-19 22:07:07 | INFO  | Task 2cb31017-e79d-45b0-ac59-fac3d2572edf is in state STARTED
2025-05-19 22:07:07.265146 | orchestrator | 2025-05-19 22:07:07 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:10.319292 | orchestrator | 2025-05-19 22:07:10 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:10.320264 | orchestrator | 2025-05-19 22:07:10 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:10.321470 | orchestrator | 2025-05-19 22:07:10 | INFO  | Task 2cb31017-e79d-45b0-ac59-fac3d2572edf is in state STARTED
2025-05-19 22:07:10.321549 | orchestrator | 2025-05-19 22:07:10 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:13.380595 | orchestrator | 2025-05-19 22:07:13 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:13.383205 | orchestrator | 2025-05-19 22:07:13 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:13.385742 | orchestrator | 2025-05-19 22:07:13 | INFO  | Task 2cb31017-e79d-45b0-ac59-fac3d2572edf is in state STARTED
2025-05-19 22:07:13.385788 | orchestrator | 2025-05-19 22:07:13 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:16.461698 | orchestrator | 2025-05-19 22:07:16 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:16.463073 | orchestrator | 2025-05-19 22:07:16 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:16.465655 | orchestrator | 2025-05-19 22:07:16 | INFO  | Task 2cb31017-e79d-45b0-ac59-fac3d2572edf is in state STARTED
2025-05-19 22:07:16.465778 | orchestrator | 2025-05-19 22:07:16 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:19.528439 | orchestrator | 2025-05-19 22:07:19 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:19.530535 | orchestrator | 2025-05-19 22:07:19 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:19.532428 | orchestrator | 2025-05-19 22:07:19 | INFO  | Task 2cb31017-e79d-45b0-ac59-fac3d2572edf is in state STARTED
2025-05-19 22:07:19.532458 | orchestrator | 2025-05-19 22:07:19 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:22.588753 | orchestrator | 2025-05-19 22:07:22 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:22.595387 | orchestrator | 2025-05-19 22:07:22 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:22.596989 | orchestrator | 2025-05-19 22:07:22 | INFO  | Task 2cb31017-e79d-45b0-ac59-fac3d2572edf is in state STARTED
2025-05-19 22:07:22.597127 | orchestrator | 2025-05-19 22:07:22 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:25.661902 | orchestrator | 2025-05-19 22:07:25 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:25.663062 | orchestrator | 2025-05-19 22:07:25 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:25.665466 | orchestrator | 2025-05-19 22:07:25 | INFO  | Task 2cb31017-e79d-45b0-ac59-fac3d2572edf is in state STARTED
2025-05-19 22:07:25.665513 | orchestrator | 2025-05-19 22:07:25 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:28.732940 | orchestrator | 2025-05-19 22:07:28 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:28.735429 | orchestrator | 2025-05-19 22:07:28 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED
2025-05-19 22:07:28.738108 | orchestrator | 2025-05-19 22:07:28 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:28.740497 | orchestrator | 2025-05-19 22:07:28 | INFO  | Task 2cb31017-e79d-45b0-ac59-fac3d2572edf is in state SUCCESS
2025-05-19 22:07:28.740726 | orchestrator | 2025-05-19 22:07:28 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:31.807303 | orchestrator | 2025-05-19 22:07:31 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:31.810305 | orchestrator | 2025-05-19 22:07:31 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED
2025-05-19 22:07:31.811889 | orchestrator | 2025-05-19 22:07:31 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:31.812260 | orchestrator | 2025-05-19 22:07:31 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:34.870418 | orchestrator | 2025-05-19 22:07:34 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:34.870745 | orchestrator | 2025-05-19 22:07:34 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED
2025-05-19 22:07:34.874665 | orchestrator | 2025-05-19 22:07:34 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:34.874701 | orchestrator | 2025-05-19 22:07:34 | INFO  |
Wait 1 second(s) until the next check
2025-05-19 22:07:37.923476 | orchestrator | 2025-05-19 22:07:37 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:37.924476 | orchestrator | 2025-05-19 22:07:37 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED
2025-05-19 22:07:37.925901 | orchestrator | 2025-05-19 22:07:37 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:37.925939 | orchestrator | 2025-05-19 22:07:37 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:40.969249 | orchestrator | 2025-05-19 22:07:40 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state STARTED
2025-05-19 22:07:40.970003 | orchestrator | 2025-05-19 22:07:40 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED
2025-05-19 22:07:40.972665 | orchestrator | 2025-05-19 22:07:40 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED
2025-05-19 22:07:40.972710 | orchestrator | 2025-05-19 22:07:40 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:07:44.028753 | orchestrator |
2025-05-19 22:07:44.028863 | orchestrator |
2025-05-19 22:07:44.028881 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-19 22:07:44.029192 | orchestrator |
2025-05-19 22:07:44.029213 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-05-19 22:07:44.029226 | orchestrator | Monday 19 May 2025 22:06:59 +0000 (0:00:00.169) 0:00:00.169 ************
2025-05-19 22:07:44.029237 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-05-19 22:07:44.029249 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-19 22:07:44.029260 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-19 22:07:44.029271 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-05-19 22:07:44.029282 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-05-19 22:07:44.029293 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-05-19 22:07:44.029304 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-05-19 22:07:44.029315 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-05-19 22:07:44.029325 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-05-19 22:07:44.029337 | orchestrator |
2025-05-19 22:07:44.029347 | orchestrator | TASK [Create share directory] **************************************************
2025-05-19 22:07:44.029359 | orchestrator | Monday 19 May 2025 22:07:04 +0000 (0:00:04.395) 0:00:04.564 ************
2025-05-19 22:07:44.029370 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-19 22:07:44.029382 | orchestrator |
2025-05-19 22:07:44.029393 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-05-19 22:07:44.029431 | orchestrator | Monday 19 May 2025 22:07:05 +0000 (0:00:01.069) 0:00:05.634 ************
2025-05-19 22:07:44.029443 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-19 22:07:44.029455 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-19 22:07:44.029466 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-19 22:07:44.029477 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-19 22:07:44.029487 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-19 22:07:44.029498 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-19 22:07:44.029509 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-19 22:07:44.029520 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-19 22:07:44.029531 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-19 22:07:44.029542 | orchestrator |
2025-05-19 22:07:44.029553 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-05-19 22:07:44.029566 | orchestrator | Monday 19 May 2025 22:07:19 +0000 (0:00:13.906) 0:00:19.541 ************
2025-05-19 22:07:44.029585 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-05-19 22:07:44.029603 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-19 22:07:44.029621 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-19 22:07:44.029661 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-05-19 22:07:44.029681 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-05-19 22:07:44.029699 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-05-19 22:07:44.029710 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-05-19 22:07:44.029721 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-05-19 22:07:44.029732 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-05-19 22:07:44.029743 | orchestrator |
2025-05-19 22:07:44.029753 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 22:07:44.029764 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:07:44.029777 | orchestrator |
2025-05-19 22:07:44.029788 | orchestrator |
2025-05-19 22:07:44.029800 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:07:44.029813 | orchestrator | Monday 19 May 2025 22:07:26 +0000 (0:00:07.324) 0:00:26.866 ************
2025-05-19 22:07:44.029825 | orchestrator | ===============================================================================
2025-05-19 22:07:44.029838 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.91s
2025-05-19 22:07:44.029851 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.33s
2025-05-19 22:07:44.029863 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.40s
2025-05-19 22:07:44.029876 | orchestrator | Create share directory -------------------------------------------------- 1.07s
2025-05-19 22:07:44.029889 | orchestrator |
2025-05-19 22:07:44.029902 | orchestrator |
2025-05-19 22:07:44.029915 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:07:44.029928 | orchestrator |
2025-05-19 22:07:44.029957 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:07:44.029971 | orchestrator | Monday 19 May 2025 22:06:01 +0000 (0:00:00.189) 0:00:00.189 ************
2025-05-19 22:07:44.029984 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:07:44.029997 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:07:44.030011 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:07:44.030122 | orchestrator |
2025-05-19 22:07:44.030137 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:07:44.030149 | orchestrator | Monday 19 May 2025
22:06:01 +0000 (0:00:00.236) 0:00:00.426 ************ 2025-05-19 22:07:44.030160 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-19 22:07:44.030171 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-19 22:07:44.030182 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-19 22:07:44.030193 | orchestrator | 2025-05-19 22:07:44.030204 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-19 22:07:44.030215 | orchestrator | 2025-05-19 22:07:44.030226 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 22:07:44.030236 | orchestrator | Monday 19 May 2025 22:06:01 +0000 (0:00:00.346) 0:00:00.772 ************ 2025-05-19 22:07:44.030247 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:07:44.030258 | orchestrator | 2025-05-19 22:07:44.030269 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-19 22:07:44.030280 | orchestrator | Monday 19 May 2025 22:06:02 +0000 (0:00:00.439) 0:00:01.212 ************ 2025-05-19 22:07:44.030305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 22:07:44.030339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 22:07:44.030368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 22:07:44.030495 | orchestrator | 2025-05-19 22:07:44.030510 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 
2025-05-19 22:07:44.030521 | orchestrator | Monday 19 May 2025 22:06:02 +0000 (0:00:00.869) 0:00:02.081 ************
2025-05-19 22:07:44.030532 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:07:44.030543 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:07:44.030564 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:07:44.030575 | orchestrator |
2025-05-19 22:07:44.030586 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-19 22:07:44.030597 | orchestrator | Monday 19 May 2025 22:06:03 +0000 (0:00:00.383) 0:00:02.464 ************
2025-05-19 22:07:44.030611 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-05-19 22:07:44.030631 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-05-19 22:07:44.030660 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-05-19 22:07:44.030681 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-05-19 22:07:44.030700 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-05-19 22:07:44.030714 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-05-19 22:07:44.030725 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-05-19 22:07:44.030736 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-05-19 22:07:44.030746 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-05-19 22:07:44.030757 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-05-19 22:07:44.030768 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-05-19 22:07:44.030779 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-05-19 22:07:44.030790 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-05-19 22:07:44.030801 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-05-19 22:07:44.030812 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-05-19 22:07:44.030823 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-05-19 22:07:44.030834 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-05-19 22:07:44.030845 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-05-19 22:07:44.030856 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-05-19 22:07:44.030866 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-05-19 22:07:44.030877 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-05-19 22:07:44.030888 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-05-19 22:07:44.030899 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-05-19 22:07:44.030910 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-05-19 22:07:44.030922 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-05-19 22:07:44.030935 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-05-19 22:07:44.030946 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-05-19 22:07:44.030957 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-05-19 22:07:44.030968 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-05-19 22:07:44.030993 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-05-19 22:07:44.031005 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-05-19 22:07:44.031016 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-05-19 22:07:44.031027 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-05-19 22:07:44.031038 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-05-19 22:07:44.031049 | orchestrator |
2025-05-19 22:07:44.031060 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-19 22:07:44.031072 | orchestrator | Monday 19 May 2025 22:06:04 +0000 (0:00:00.641) 0:00:03.106 ************
2025-05-19 22:07:44.031083 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:07:44.031145 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:07:44.031163 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:07:44.031176 | orchestrator |
2025-05-19
22:07:44.031189 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-19 22:07:44.031201 | orchestrator | Monday 19 May 2025 22:06:04 +0000 (0:00:00.261) 0:00:03.368 ************
2025-05-19 22:07:44.031215 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.031228 | orchestrator |
2025-05-19 22:07:44.031240 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-19 22:07:44.031261 | orchestrator | Monday 19 May 2025 22:06:04 +0000 (0:00:00.145) 0:00:03.513 ************
2025-05-19 22:07:44.031275 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.031288 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:07:44.031301 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:07:44.031313 | orchestrator |
2025-05-19 22:07:44.031326 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-19 22:07:44.031339 | orchestrator | Monday 19 May 2025 22:06:04 +0000 (0:00:00.562) 0:00:04.076 ************
2025-05-19 22:07:44.031351 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:07:44.031364 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:07:44.031377 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:07:44.031389 | orchestrator |
2025-05-19 22:07:44.031402 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-19 22:07:44.031415 | orchestrator | Monday 19 May 2025 22:06:05 +0000 (0:00:00.323) 0:00:04.399 ************
2025-05-19 22:07:44.031428 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.031440 | orchestrator |
2025-05-19 22:07:44.031453 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-19 22:07:44.031464 | orchestrator | Monday 19 May 2025 22:06:05 +0000 (0:00:00.135) 0:00:04.535 ************
2025-05-19 22:07:44.031475 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.031486 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:07:44.031496 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:07:44.031507 | orchestrator |
2025-05-19 22:07:44.031518 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-19 22:07:44.031529 | orchestrator | Monday 19 May 2025 22:06:05 +0000 (0:00:00.287) 0:00:04.822 ************
2025-05-19 22:07:44.031540 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:07:44.031551 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:07:44.031562 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:07:44.031572 | orchestrator |
2025-05-19 22:07:44.031583 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-19 22:07:44.031594 | orchestrator | Monday 19 May 2025 22:06:06 +0000 (0:00:00.267) 0:00:05.090 ************
2025-05-19 22:07:44.031605 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.031625 | orchestrator |
2025-05-19 22:07:44.031636 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-19 22:07:44.031647 | orchestrator | Monday 19 May 2025 22:06:06 +0000 (0:00:00.304) 0:00:05.394 ************
2025-05-19 22:07:44.031658 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.031670 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:07:44.031681 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:07:44.031691 | orchestrator |
2025-05-19 22:07:44.031702 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-19 22:07:44.031713 | orchestrator | Monday 19 May 2025 22:06:06 +0000 (0:00:00.305) 0:00:05.700 ************
2025-05-19 22:07:44.031724 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:07:44.031735 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:07:44.031746 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:07:44.031757 | orchestrator |
2025-05-19 22:07:44.031767 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-19 22:07:44.031779 | orchestrator | Monday 19 May 2025 22:06:06 +0000 (0:00:00.295) 0:00:05.995 ************
2025-05-19 22:07:44.031790 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.031801 | orchestrator |
2025-05-19 22:07:44.031811 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-19 22:07:44.031822 | orchestrator | Monday 19 May 2025 22:06:07 +0000 (0:00:00.132) 0:00:06.128 ************
2025-05-19 22:07:44.031833 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.031844 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:07:44.031855 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:07:44.031866 | orchestrator |
2025-05-19 22:07:44.031877 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-19 22:07:44.031887 | orchestrator | Monday 19 May 2025 22:06:07 +0000 (0:00:00.283) 0:00:06.412 ************
2025-05-19 22:07:44.031898 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:07:44.031909 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:07:44.031920 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:07:44.031930 | orchestrator |
2025-05-19 22:07:44.031941 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-19 22:07:44.031952 | orchestrator | Monday 19 May 2025 22:06:07 +0000 (0:00:00.480) 0:00:06.892 ************
2025-05-19 22:07:44.031976 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.031987 | orchestrator |
2025-05-19 22:07:44.031998 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-19 22:07:44.032009 | orchestrator | Monday 19 May 2025 22:06:07 +0000 (0:00:00.126) 0:00:07.019 ************
2025-05-19 22:07:44.032019 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.032030 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:07:44.032041 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:07:44.032052 | orchestrator |
2025-05-19 22:07:44.032062 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-19 22:07:44.032073 | orchestrator | Monday 19 May 2025 22:06:08 +0000 (0:00:00.290) 0:00:07.309 ************
2025-05-19 22:07:44.032084 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:07:44.032123 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:07:44.032136 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:07:44.032147 | orchestrator |
2025-05-19 22:07:44.032158 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-19 22:07:44.032169 | orchestrator | Monday 19 May 2025 22:06:08 +0000 (0:00:00.292) 0:00:07.602 ************
2025-05-19 22:07:44.032180 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.032191 | orchestrator |
2025-05-19 22:07:44.032202 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-19 22:07:44.032213 | orchestrator | Monday 19 May 2025 22:06:08 +0000 (0:00:00.109) 0:00:07.711 ************
2025-05-19 22:07:44.032224 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.032235 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:07:44.032245 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:07:44.032256 | orchestrator |
2025-05-19 22:07:44.032274 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-19 22:07:44.032286 | orchestrator | Monday 19 May 2025 22:06:09 +0000 (0:00:00.451) 0:00:08.162 ************
2025-05-19 22:07:44.032297 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:07:44.032308 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:07:44.032318 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:07:44.032329 | orchestrator |
2025-05-19 22:07:44.032347 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-19 22:07:44.032358 | orchestrator | Monday 19 May 2025 22:06:09 +0000 (0:00:00.318) 0:00:08.481 ************
2025-05-19 22:07:44.032369 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.032380 | orchestrator |
2025-05-19 22:07:44.032391 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-19 22:07:44.032402 | orchestrator | Monday 19 May 2025 22:06:09 +0000 (0:00:00.127) 0:00:08.609 ************
2025-05-19 22:07:44.032413 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.032424 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:07:44.032435 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:07:44.032445 | orchestrator |
2025-05-19 22:07:44.032456 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-05-19 22:07:44.032467 | orchestrator | Monday 19 May 2025 22:06:09 +0000 (0:00:00.271) 0:00:08.880 ************
2025-05-19 22:07:44.032478 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:07:44.032489 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:07:44.032499 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:07:44.032510 | orchestrator |
2025-05-19 22:07:44.032521 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-05-19 22:07:44.032532 | orchestrator | Monday 19 May 2025 22:06:10 +0000 (0:00:00.300) 0:00:09.181 ************
2025-05-19 22:07:44.032543 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:07:44.032554 | orchestrator |
2025-05-19 22:07:44.032565 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-05-19 22:07:44.032576 | orchestrator | Monday 19 May 2025 22:06:10 +0000 (0:00:00.120) 0:00:09.301 ************
2025-05-19 22:07:44.032586 |
orchestrator | skipping: [testbed-node-0] 2025-05-19 22:07:44.032597 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:07:44.032608 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:07:44.032619 | orchestrator | 2025-05-19 22:07:44.032630 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 22:07:44.032641 | orchestrator | Monday 19 May 2025 22:06:10 +0000 (0:00:00.482) 0:00:09.783 ************ 2025-05-19 22:07:44.032652 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:07:44.032663 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:07:44.032674 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:07:44.032685 | orchestrator | 2025-05-19 22:07:44.032696 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 22:07:44.032706 | orchestrator | Monday 19 May 2025 22:06:11 +0000 (0:00:00.300) 0:00:10.083 ************ 2025-05-19 22:07:44.032717 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:07:44.032728 | orchestrator | 2025-05-19 22:07:44.032739 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 22:07:44.032750 | orchestrator | Monday 19 May 2025 22:06:11 +0000 (0:00:00.126) 0:00:10.210 ************ 2025-05-19 22:07:44.032761 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:07:44.032772 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:07:44.032783 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:07:44.032793 | orchestrator | 2025-05-19 22:07:44.032804 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-19 22:07:44.032815 | orchestrator | Monday 19 May 2025 22:06:11 +0000 (0:00:00.295) 0:00:10.505 ************ 2025-05-19 22:07:44.032826 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:07:44.032837 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:07:44.032848 | orchestrator | ok: 
[testbed-node-2] 2025-05-19 22:07:44.032859 | orchestrator | 2025-05-19 22:07:44.032870 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-19 22:07:44.032887 | orchestrator | Monday 19 May 2025 22:06:12 +0000 (0:00:00.619) 0:00:11.125 ************ 2025-05-19 22:07:44.032898 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:07:44.032909 | orchestrator | 2025-05-19 22:07:44.032920 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-19 22:07:44.032931 | orchestrator | Monday 19 May 2025 22:06:12 +0000 (0:00:00.132) 0:00:11.257 ************ 2025-05-19 22:07:44.032942 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:07:44.032952 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:07:44.032963 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:07:44.032974 | orchestrator | 2025-05-19 22:07:44.032985 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-19 22:07:44.033002 | orchestrator | Monday 19 May 2025 22:06:12 +0000 (0:00:00.317) 0:00:11.574 ************ 2025-05-19 22:07:44.033014 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:07:44.033024 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:07:44.033035 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:07:44.033046 | orchestrator | 2025-05-19 22:07:44.033057 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-19 22:07:44.033068 | orchestrator | Monday 19 May 2025 22:06:14 +0000 (0:00:01.624) 0:00:13.198 ************ 2025-05-19 22:07:44.033078 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-19 22:07:44.033089 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-19 22:07:44.033123 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-19 22:07:44.033135 | orchestrator | 2025-05-19 22:07:44.033146 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-19 22:07:44.033157 | orchestrator | Monday 19 May 2025 22:06:16 +0000 (0:00:01.980) 0:00:15.179 ************ 2025-05-19 22:07:44.033168 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-19 22:07:44.033179 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-19 22:07:44.033190 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-19 22:07:44.033201 | orchestrator | 2025-05-19 22:07:44.033211 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-19 22:07:44.033222 | orchestrator | Monday 19 May 2025 22:06:18 +0000 (0:00:02.489) 0:00:17.668 ************ 2025-05-19 22:07:44.033239 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-19 22:07:44.033250 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-19 22:07:44.033261 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-19 22:07:44.033272 | orchestrator | 2025-05-19 22:07:44.033283 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-19 22:07:44.033294 | orchestrator | Monday 19 May 2025 22:06:20 +0000 (0:00:01.703) 0:00:19.371 ************ 2025-05-19 22:07:44.033305 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:07:44.033316 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:07:44.033327 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:07:44.033338 | 
orchestrator | 2025-05-19 22:07:44.033349 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-19 22:07:44.033359 | orchestrator | Monday 19 May 2025 22:06:20 +0000 (0:00:00.329) 0:00:19.701 ************ 2025-05-19 22:07:44.033370 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:07:44.033381 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:07:44.033392 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:07:44.033403 | orchestrator | 2025-05-19 22:07:44.033414 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 22:07:44.033433 | orchestrator | Monday 19 May 2025 22:06:20 +0000 (0:00:00.340) 0:00:20.041 ************ 2025-05-19 22:07:44.033444 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:07:44.033455 | orchestrator | 2025-05-19 22:07:44.033466 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-19 22:07:44.033477 | orchestrator | Monday 19 May 2025 22:06:21 +0000 (0:00:00.752) 0:00:20.793 ************ 2025-05-19 22:07:44.033496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 22:07:44.033519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 22:07:44.033546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 22:07:44.033559 | orchestrator | 2025-05-19 22:07:44.033570 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-19 22:07:44.033581 | orchestrator | 
Monday 19 May 2025 22:06:23 +0000 (0:00:01.676) 0:00:22.470 ************ 2025-05-19 22:07:44.033603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra':
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 22:07:44.033640 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:07:44 | INFO  | Task af2a0414-fedf-4e76-98c1-a0ea1c01644f is in state SUCCESS 2025-05-19 22:07:44.033664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']},
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 22:07:44.033678 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:07:44.033690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 22:07:44.033710 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:07:44.033721 | orchestrator | 2025-05-19 22:07:44.033732 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-19 22:07:44.033743 | orchestrator | Monday 19 May 2025 22:06:24 +0000 (0:00:00.821) 0:00:23.291 ************ 2025-05-19 22:07:44.033769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 22:07:44.033788 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:07:44.033801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 22:07:44.033812 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:07:44.033836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}})  2025-05-19 22:07:44.033856 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:07:44.033867 | orchestrator | 2025-05-19 22:07:44.033878 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-19 22:07:44.033889 | orchestrator | Monday 19 May 2025 22:06:25 +0000 (0:00:01.132) 0:00:24.424 ************ 2025-05-19 22:07:44.033910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 22:07:44.033931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': 
'80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 22:07:44.033957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-19 22:07:44.033970 | orchestrator | 2025-05-19 22:07:44.033981 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 22:07:44.033992 | orchestrator | Monday 19 May 2025 22:06:26 +0000 (0:00:01.188) 0:00:25.612 ************ 2025-05-19 22:07:44.034003 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:07:44.034014 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:07:44.034060 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:07:44.034071 | orchestrator | 2025-05-19 22:07:44.034082 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-19 22:07:44.034092 | orchestrator | Monday 19 May 2025 22:06:26 +0000 (0:00:00.328) 0:00:25.941 ************ 2025-05-19 22:07:44.034139 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:07:44.034151 | orchestrator | 2025-05-19 
22:07:44.034162 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-19 22:07:44.034181 | orchestrator | Monday 19 May 2025 22:06:27 +0000 (0:00:00.735) 0:00:26.676 ************ 2025-05-19 22:07:44.034193 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:07:44.034204 | orchestrator | 2025-05-19 22:07:44.034215 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-19 22:07:44.034226 | orchestrator | Monday 19 May 2025 22:06:29 +0000 (0:00:02.101) 0:00:28.778 ************ 2025-05-19 22:07:44.034237 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:07:44.034248 | orchestrator | 2025-05-19 22:07:44.034259 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-19 22:07:44.034269 | orchestrator | Monday 19 May 2025 22:06:31 +0000 (0:00:02.052) 0:00:30.831 ************ 2025-05-19 22:07:44.034280 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:07:44.034291 | orchestrator | 2025-05-19 22:07:44.034302 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-19 22:07:44.034313 | orchestrator | Monday 19 May 2025 22:06:46 +0000 (0:00:14.566) 0:00:45.397 ************ 2025-05-19 22:07:44.034324 | orchestrator | 2025-05-19 22:07:44.034335 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-19 22:07:44.034346 | orchestrator | Monday 19 May 2025 22:06:46 +0000 (0:00:00.063) 0:00:45.460 ************ 2025-05-19 22:07:44.034356 | orchestrator | 2025-05-19 22:07:44.034367 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-19 22:07:44.034378 | orchestrator | Monday 19 May 2025 22:06:46 +0000 (0:00:00.074) 0:00:45.535 ************ 2025-05-19 22:07:44.034389 | orchestrator | 2025-05-19 22:07:44.034400 | orchestrator | RUNNING HANDLER [horizon : Restart 
horizon container] ************************** 2025-05-19 22:07:44.034411 | orchestrator | Monday 19 May 2025 22:06:46 +0000 (0:00:00.065) 0:00:45.600 ************ 2025-05-19 22:07:44.034422 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:07:44.034432 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:07:44.034444 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:07:44.034455 | orchestrator | 2025-05-19 22:07:44.034466 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:07:44.034477 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-05-19 22:07:44.034488 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-19 22:07:44.034499 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-19 22:07:44.034510 | orchestrator | 2025-05-19 22:07:44.034521 | orchestrator | 2025-05-19 22:07:44.034532 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:07:44.034543 | orchestrator | Monday 19 May 2025 22:07:43 +0000 (0:00:56.851) 0:01:42.452 ************ 2025-05-19 22:07:44.034553 | orchestrator | =============================================================================== 2025-05-19 22:07:44.034564 | orchestrator | horizon : Restart horizon container ------------------------------------ 56.85s 2025-05-19 22:07:44.034575 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.57s 2025-05-19 22:07:44.034586 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.49s 2025-05-19 22:07:44.034597 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.10s 2025-05-19 22:07:44.034607 | orchestrator | horizon : Creating Horizon database user and setting 
permissions -------- 2.05s 2025-05-19 22:07:44.034618 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.98s 2025-05-19 22:07:44.034635 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.70s 2025-05-19 22:07:44.034646 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.68s 2025-05-19 22:07:44.034657 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.62s 2025-05-19 22:07:44.034673 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.19s 2025-05-19 22:07:44.034684 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.13s 2025-05-19 22:07:44.034695 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.87s 2025-05-19 22:07:44.034706 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.82s 2025-05-19 22:07:44.034716 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2025-05-19 22:07:44.034727 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-05-19 22:07:44.034738 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s 2025-05-19 22:07:44.034749 | orchestrator | horizon : Update policy file name --------------------------------------- 0.62s 2025-05-19 22:07:44.034760 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2025-05-19 22:07:44.034770 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s 2025-05-19 22:07:44.034781 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s 2025-05-19 22:07:44.034792 | orchestrator | 2025-05-19 22:07:44 | INFO  | Task 
7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:07:44.034804 | orchestrator | 2025-05-19 22:07:44 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:07:44.034815 | orchestrator | 2025-05-19 22:07:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:07:47.083325 | orchestrator | 2025-05-19 22:07:47 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:07:47.087627 | orchestrator | 2025-05-19 22:07:47 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:07:47.087717 | orchestrator | 2025-05-19 22:07:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:07:50.136893 | orchestrator | 2025-05-19 22:07:50 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:07:50.139380 | orchestrator | 2025-05-19 22:07:50 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:07:50.139415 | orchestrator | 2025-05-19 22:07:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:07:53.182683 | orchestrator | 2025-05-19 22:07:53 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:07:53.184536 | orchestrator | 2025-05-19 22:07:53 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:07:53.184595 | orchestrator | 2025-05-19 22:07:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:07:56.232688 | orchestrator | 2025-05-19 22:07:56 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:07:56.234712 | orchestrator | 2025-05-19 22:07:56 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:07:56.234749 | orchestrator | 2025-05-19 22:07:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:07:59.290117 | orchestrator | 2025-05-19 22:07:59 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 
22:07:59.291550 | orchestrator | 2025-05-19 22:07:59 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:07:59.291587 | orchestrator | 2025-05-19 22:07:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:02.346234 | orchestrator | 2025-05-19 22:08:02 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:08:02.347571 | orchestrator | 2025-05-19 22:08:02 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:02.347604 | orchestrator | 2025-05-19 22:08:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:05.400891 | orchestrator | 2025-05-19 22:08:05 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:08:05.402767 | orchestrator | 2025-05-19 22:08:05 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:05.402784 | orchestrator | 2025-05-19 22:08:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:08.449949 | orchestrator | 2025-05-19 22:08:08 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:08:08.451785 | orchestrator | 2025-05-19 22:08:08 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:08.451852 | orchestrator | 2025-05-19 22:08:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:11.506707 | orchestrator | 2025-05-19 22:08:11 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:08:11.508572 | orchestrator | 2025-05-19 22:08:11 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:11.508658 | orchestrator | 2025-05-19 22:08:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:14.561825 | orchestrator | 2025-05-19 22:08:14 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:08:14.563733 | orchestrator | 2025-05-19 22:08:14 | INFO  | Task 
539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:14.563770 | orchestrator | 2025-05-19 22:08:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:17.618429 | orchestrator | 2025-05-19 22:08:17 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:08:17.620143 | orchestrator | 2025-05-19 22:08:17 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:17.620173 | orchestrator | 2025-05-19 22:08:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:20.670317 | orchestrator | 2025-05-19 22:08:20 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state STARTED 2025-05-19 22:08:20.675180 | orchestrator | 2025-05-19 22:08:20 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:20.675232 | orchestrator | 2025-05-19 22:08:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:23.743817 | orchestrator | 2025-05-19 22:08:23 | INFO  | Task 7842f78f-41ce-4a1e-a6ad-5fd8baa0d4e3 is in state SUCCESS 2025-05-19 22:08:23.744197 | orchestrator | 2025-05-19 22:08:23 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:23.744226 | orchestrator | 2025-05-19 22:08:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:26.814204 | orchestrator | 2025-05-19 22:08:26 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:26.814298 | orchestrator | 2025-05-19 22:08:26 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:26.815352 | orchestrator | 2025-05-19 22:08:26 | INFO  | Task 3f857446-3410-4fd1-9cad-d76e24469f6e is in state STARTED 2025-05-19 22:08:26.816365 | orchestrator | 2025-05-19 22:08:26 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:08:26.816408 | orchestrator | 2025-05-19 22:08:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 
22:08:29.861927 | orchestrator | 2025-05-19 22:08:29 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:29.870224 | orchestrator | 2025-05-19 22:08:29 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:29.870295 | orchestrator | 2025-05-19 22:08:29 | INFO  | Task 3f857446-3410-4fd1-9cad-d76e24469f6e is in state SUCCESS 2025-05-19 22:08:29.878461 | orchestrator | 2025-05-19 22:08:29 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:08:29.878539 | orchestrator | 2025-05-19 22:08:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:32.932689 | orchestrator | 2025-05-19 22:08:32 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:08:32.933215 | orchestrator | 2025-05-19 22:08:32 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:32.935137 | orchestrator | 2025-05-19 22:08:32 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:08:32.936539 | orchestrator | 2025-05-19 22:08:32 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:32.939140 | orchestrator | 2025-05-19 22:08:32 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:08:32.939169 | orchestrator | 2025-05-19 22:08:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:35.980947 | orchestrator | 2025-05-19 22:08:35 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:08:35.985611 | orchestrator | 2025-05-19 22:08:35 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:35.987417 | orchestrator | 2025-05-19 22:08:35 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:08:35.989296 | orchestrator | 2025-05-19 22:08:35 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 
22:08:35.992233 | orchestrator | 2025-05-19 22:08:35 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:08:35.992272 | orchestrator | 2025-05-19 22:08:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:39.028977 | orchestrator | 2025-05-19 22:08:39 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:08:39.031345 | orchestrator | 2025-05-19 22:08:39 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:39.032457 | orchestrator | 2025-05-19 22:08:39 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:08:39.034162 | orchestrator | 2025-05-19 22:08:39 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state STARTED 2025-05-19 22:08:39.035485 | orchestrator | 2025-05-19 22:08:39 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:08:39.035526 | orchestrator | 2025-05-19 22:08:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:42.080067 | orchestrator | 2025-05-19 22:08:42 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:08:42.081458 | orchestrator | 2025-05-19 22:08:42 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:42.082176 | orchestrator | 2025-05-19 22:08:42 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:08:42.083808 | orchestrator | 2025-05-19 22:08:42 | INFO  | Task 539dc29c-da6d-413e-8765-0e5fdb994f82 is in state SUCCESS 2025-05-19 22:08:42.085415 | orchestrator | 2025-05-19 22:08:42.085448 | orchestrator | 2025-05-19 22:08:42.085461 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-19 22:08:42.085598 | orchestrator | 2025-05-19 22:08:42.085614 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-19 22:08:42.085625 | orchestrator | 
Monday 19 May 2025 22:07:30 +0000 (0:00:00.240) 0:00:00.240 ************ 2025-05-19 22:08:42.085637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-19 22:08:42.085648 | orchestrator | 2025-05-19 22:08:42.085660 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-19 22:08:42.085672 | orchestrator | Monday 19 May 2025 22:07:31 +0000 (0:00:00.222) 0:00:00.463 ************ 2025-05-19 22:08:42.086623 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-19 22:08:42.086643 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-19 22:08:42.086654 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-19 22:08:42.086665 | orchestrator | 2025-05-19 22:08:42.086676 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-19 22:08:42.086688 | orchestrator | Monday 19 May 2025 22:07:32 +0000 (0:00:01.218) 0:00:01.681 ************ 2025-05-19 22:08:42.086698 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-19 22:08:42.086709 | orchestrator | 2025-05-19 22:08:42.086720 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-19 22:08:42.086731 | orchestrator | Monday 19 May 2025 22:07:33 +0000 (0:00:01.205) 0:00:02.887 ************ 2025-05-19 22:08:42.086742 | orchestrator | changed: [testbed-manager] 2025-05-19 22:08:42.086753 | orchestrator | 2025-05-19 22:08:42.086764 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-19 22:08:42.086775 | orchestrator | Monday 19 May 2025 22:07:34 +0000 (0:00:01.129) 0:00:04.017 ************ 2025-05-19 22:08:42.086786 | orchestrator | changed: [testbed-manager] 
2025-05-19 22:08:42.086797 | orchestrator | 2025-05-19 22:08:42.086808 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-19 22:08:42.086819 | orchestrator | Monday 19 May 2025 22:07:35 +0000 (0:00:00.950) 0:00:04.968 ************ 2025-05-19 22:08:42.086829 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-05-19 22:08:42.086840 | orchestrator | ok: [testbed-manager] 2025-05-19 22:08:42.086851 | orchestrator | 2025-05-19 22:08:42.086862 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-19 22:08:42.086873 | orchestrator | Monday 19 May 2025 22:08:13 +0000 (0:00:37.410) 0:00:42.378 ************ 2025-05-19 22:08:42.086884 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-19 22:08:42.086895 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-19 22:08:42.086906 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-19 22:08:42.086916 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-19 22:08:42.086927 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-19 22:08:42.086938 | orchestrator | 2025-05-19 22:08:42.086949 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-19 22:08:42.086962 | orchestrator | Monday 19 May 2025 22:08:17 +0000 (0:00:04.028) 0:00:46.407 ************ 2025-05-19 22:08:42.086981 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-19 22:08:42.087044 | orchestrator | 2025-05-19 22:08:42.087065 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-19 22:08:42.087084 | orchestrator | Monday 19 May 2025 22:08:17 +0000 (0:00:00.448) 0:00:46.856 ************ 2025-05-19 22:08:42.087103 | orchestrator | skipping: [testbed-manager] 2025-05-19 22:08:42.087119 | orchestrator | 2025-05-19 
22:08:42.087131 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-19 22:08:42.087142 | orchestrator | Monday 19 May 2025 22:08:17 +0000 (0:00:00.127) 0:00:46.984 ************ 2025-05-19 22:08:42.087152 | orchestrator | skipping: [testbed-manager] 2025-05-19 22:08:42.087177 | orchestrator | 2025-05-19 22:08:42.087188 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-05-19 22:08:42.087199 | orchestrator | Monday 19 May 2025 22:08:18 +0000 (0:00:00.319) 0:00:47.303 ************ 2025-05-19 22:08:42.087209 | orchestrator | changed: [testbed-manager] 2025-05-19 22:08:42.087220 | orchestrator | 2025-05-19 22:08:42.087238 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-19 22:08:42.087252 | orchestrator | Monday 19 May 2025 22:08:19 +0000 (0:00:01.676) 0:00:48.980 ************ 2025-05-19 22:08:42.087264 | orchestrator | changed: [testbed-manager] 2025-05-19 22:08:42.087277 | orchestrator | 2025-05-19 22:08:42.087289 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-19 22:08:42.087302 | orchestrator | Monday 19 May 2025 22:08:20 +0000 (0:00:00.709) 0:00:49.690 ************ 2025-05-19 22:08:42.087315 | orchestrator | changed: [testbed-manager] 2025-05-19 22:08:42.087328 | orchestrator | 2025-05-19 22:08:42.087340 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-19 22:08:42.087352 | orchestrator | Monday 19 May 2025 22:08:21 +0000 (0:00:00.613) 0:00:50.303 ************ 2025-05-19 22:08:42.087365 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-19 22:08:42.087377 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-19 22:08:42.087390 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-19 22:08:42.087403 | orchestrator | ok: [testbed-manager] => (item=rbd) 
2025-05-19 22:08:42.087415 | orchestrator | 2025-05-19 22:08:42.087427 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:08:42.087440 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 22:08:42.087454 | orchestrator | 2025-05-19 22:08:42.087466 | orchestrator | 2025-05-19 22:08:42.087530 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:08:42.087545 | orchestrator | Monday 19 May 2025 22:08:22 +0000 (0:00:01.481) 0:00:51.784 ************ 2025-05-19 22:08:42.087559 | orchestrator | =============================================================================== 2025-05-19 22:08:42.087572 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 37.41s 2025-05-19 22:08:42.087585 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.03s 2025-05-19 22:08:42.087597 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.68s 2025-05-19 22:08:42.087608 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.48s 2025-05-19 22:08:42.087618 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s 2025-05-19 22:08:42.087629 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.21s 2025-05-19 22:08:42.087640 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.13s 2025-05-19 22:08:42.087651 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s 2025-05-19 22:08:42.087661 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s 2025-05-19 22:08:42.087672 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s 
2025-05-19 22:08:42.087683 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s
2025-05-19 22:08:42.087694 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.32s
2025-05-19 22:08:42.087704 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2025-05-19 22:08:42.087715 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2025-05-19 22:08:42.087726 | orchestrator |
2025-05-19 22:08:42.087737 | orchestrator |
2025-05-19 22:08:42.087748 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:08:42.087759 | orchestrator |
2025-05-19 22:08:42.087770 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:08:42.087781 | orchestrator | Monday 19 May 2025 22:08:27 +0000 (0:00:00.199) 0:00:00.199 ************
2025-05-19 22:08:42.087798 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:08:42.087809 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:08:42.087821 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:08:42.087832 | orchestrator |
2025-05-19 22:08:42.087842 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:08:42.087853 | orchestrator | Monday 19 May 2025 22:08:27 +0000 (0:00:00.284) 0:00:00.484 ************
2025-05-19 22:08:42.087864 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-19 22:08:42.087875 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-19 22:08:42.087886 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-19 22:08:42.087896 | orchestrator |
2025-05-19 22:08:42.087907 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-05-19 22:08:42.087918 | orchestrator |
2025-05-19 22:08:42.087929 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-05-19 22:08:42.087940 | orchestrator | Monday 19 May 2025 22:08:28 +0000 (0:00:00.655) 0:00:01.140 ************
2025-05-19 22:08:42.087951 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:08:42.087962 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:08:42.087973 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:08:42.087984 | orchestrator |
2025-05-19 22:08:42.088058 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 22:08:42.088081 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:08:42.088102 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:08:42.088122 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:08:42.088134 | orchestrator |
2025-05-19 22:08:42.088145 | orchestrator |
2025-05-19 22:08:42.088156 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:08:42.088167 | orchestrator | Monday 19 May 2025 22:08:28 +0000 (0:00:00.661) 0:00:01.802 ************
2025-05-19 22:08:42.088184 | orchestrator | ===============================================================================
2025-05-19 22:08:42.088195 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.66s
2025-05-19 22:08:42.088206 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2025-05-19 22:08:42.088217 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2025-05-19 22:08:42.088228 | orchestrator |
2025-05-19 22:08:42.088239 | orchestrator |
2025-05-19 22:08:42.088250 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:08:42.088260 | orchestrator |
2025-05-19 22:08:42.088271 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:08:42.088282 | orchestrator | Monday 19 May 2025 22:06:01 +0000 (0:00:00.251) 0:00:00.251 ************
2025-05-19 22:08:42.088293 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:08:42.088304 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:08:42.088315 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:08:42.088326 | orchestrator |
2025-05-19 22:08:42.088337 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:08:42.088348 | orchestrator | Monday 19 May 2025 22:06:01 +0000 (0:00:00.263) 0:00:00.515 ************
2025-05-19 22:08:42.088359 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-05-19 22:08:42.088370 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-05-19 22:08:42.088381 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-05-19 22:08:42.088392 | orchestrator |
2025-05-19 22:08:42.088403 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-05-19 22:08:42.088413 | orchestrator |
2025-05-19 22:08:42.088465 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-19 22:08:42.088487 | orchestrator | Monday 19 May 2025 22:06:02 +0000 (0:00:00.352) 0:00:00.867 ************
2025-05-19 22:08:42.088499 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:08:42.088510 | orchestrator |
2025-05-19 22:08:42.088521 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-05-19 22:08:42.088531 | orchestrator | Monday 19 May 2025 22:06:02 +0000 (0:00:00.517) 0:00:01.384 ************ 2025-05-19
22:08:42.088548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.088564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.088582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.088595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.088648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.088662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.088674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.088685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.088702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.088713 | orchestrator | 2025-05-19 22:08:42.088725 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-19 22:08:42.088736 | orchestrator | Monday 19 May 2025 22:06:04 +0000 (0:00:01.568) 0:00:02.953 ************ 2025-05-19 22:08:42.088747 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-19 22:08:42.088758 | orchestrator | 2025-05-19 22:08:42.088769 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-19 22:08:42.088780 | orchestrator | Monday 19 May 2025 22:06:04 +0000 (0:00:00.791) 0:00:03.744 ************ 2025-05-19 22:08:42.088790 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:08:42.088809 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:08:42.088820 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:08:42.088831 | orchestrator 
| 2025-05-19 22:08:42.088842 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-19 22:08:42.088853 | orchestrator | Monday 19 May 2025 22:06:05 +0000 (0:00:00.461) 0:00:04.205 ************ 2025-05-19 22:08:42.088864 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 22:08:42.088874 | orchestrator | 2025-05-19 22:08:42.088885 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 22:08:42.088896 | orchestrator | Monday 19 May 2025 22:06:06 +0000 (0:00:00.691) 0:00:04.897 ************ 2025-05-19 22:08:42.088907 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:08:42.088918 | orchestrator | 2025-05-19 22:08:42.088935 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-19 22:08:42.088946 | orchestrator | Monday 19 May 2025 22:06:06 +0000 (0:00:00.491) 0:00:05.388 ************ 2025-05-19 22:08:42.088959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.088972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.089013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.089040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089149 | orchestrator | 2025-05-19 22:08:42.089164 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-19 22:08:42.089184 | orchestrator | Monday 19 May 2025 22:06:09 +0000 (0:00:03.362) 0:00:08.751 ************ 2025-05-19 22:08:42.089202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 22:08:42.089222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.089234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 22:08:42.089245 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:08:42.089257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 22:08:42.089269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.089291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 22:08:42.089303 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:08:42.089321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 22:08:42.089334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.089345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 22:08:42.089356 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:08:42.089367 | orchestrator | 2025-05-19 22:08:42.089378 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-19 22:08:42.089389 | orchestrator | Monday 19 May 2025 22:06:10 +0000 (0:00:00.572) 0:00:09.324 ************ 2025-05-19 22:08:42.089401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 22:08:42.089424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.089436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 22:08:42.089447 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:08:42.089467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 22:08:42.089479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2025-05-19 22:08:42.089491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 22:08:42.089509 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:08:42.089535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-19 22:08:42.089548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.089565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-19 22:08:42.089577 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:08:42.089588 | orchestrator | 2025-05-19 22:08:42.089599 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-19 22:08:42.089610 | orchestrator | Monday 19 May 2025 22:06:11 +0000 (0:00:00.773) 0:00:10.098 ************ 2025-05-19 22:08:42.089622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.089634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.089657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.089675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089698 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089757 | orchestrator | 2025-05-19 22:08:42.089769 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-19 22:08:42.089780 | orchestrator | Monday 19 May 2025 22:06:15 +0000 (0:00:03.942) 0:00:14.040 ************ 2025-05-19 22:08:42.089798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.089810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.089822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.089840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.089856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.089869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.089887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.089931 | orchestrator | 2025-05-19 22:08:42.089942 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-19 22:08:42.089953 
| orchestrator | Monday 19 May 2025 22:06:20 +0000 (0:00:05.216) 0:00:19.257 ************
2025-05-19 22:08:42.089964 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:08:42.089975 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:08:42.089986 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:08:42.090050 | orchestrator |
2025-05-19 22:08:42.090064 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-05-19 22:08:42.090076 | orchestrator | Monday 19 May 2025 22:06:21 +0000 (0:00:01.345) 0:00:20.602 ************
2025-05-19 22:08:42.090086 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.090097 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:08:42.090108 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:08:42.090119 | orchestrator |
2025-05-19 22:08:42.090138 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-05-19 22:08:42.090158 | orchestrator | Monday 19 May 2025 22:06:22 +0000 (0:00:00.821) 0:00:21.424 ************
2025-05-19 22:08:42.090178 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.090198 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:08:42.090218 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:08:42.090236 | orchestrator |
2025-05-19 22:08:42.090247 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-05-19 22:08:42.090258 | orchestrator | Monday 19 May 2025 22:06:23 +0000 (0:00:00.315) 0:00:22.043 ************
2025-05-19 22:08:42.090269 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.090280 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:08:42.090291 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:08:42.090301 | orchestrator |
2025-05-19 22:08:42.090312 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-05-19 22:08:42.090329 |
orchestrator | Monday 19 May 2025 22:06:23 +0000 (0:00:00.315) 0:00:22.359 ************ 2025-05-19 22:08:42.090342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.090362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.090383 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.090395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.090411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.090423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-19 22:08:42.090442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.090460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.090471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.090482 | orchestrator | 2025-05-19 22:08:42.090493 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 22:08:42.090504 | orchestrator | Monday 19 May 2025 22:06:25 +0000 (0:00:02.293) 0:00:24.652 ************ 2025-05-19 22:08:42.090515 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:08:42.090526 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:08:42.090537 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:08:42.090548 | orchestrator | 2025-05-19 22:08:42.090559 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-19 22:08:42.090570 | orchestrator | Monday 19 May 2025 22:06:26 +0000 (0:00:00.300) 0:00:24.952 ************ 2025-05-19 22:08:42.090581 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-19 22:08:42.090592 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-19 22:08:42.090603 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-05-19 22:08:42.090614 | orchestrator |
2025-05-19 22:08:42.090625 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-05-19 22:08:42.090635 | orchestrator | Monday 19 May 2025 22:06:28 +0000 (0:00:01.864) 0:00:26.817 ************
2025-05-19 22:08:42.090646 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-19 22:08:42.090657 | orchestrator |
2025-05-19 22:08:42.090668 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-05-19 22:08:42.090679 | orchestrator | Monday 19 May 2025 22:06:28 +0000 (0:00:00.861) 0:00:27.678 ************
2025-05-19 22:08:42.090689 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.090700 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:08:42.090711 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:08:42.090722 | orchestrator |
2025-05-19 22:08:42.090732 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-05-19 22:08:42.090743 | orchestrator | Monday 19 May 2025 22:06:29 +0000 (0:00:00.565) 0:00:28.244 ************
2025-05-19 22:08:42.090758 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-19 22:08:42.090769 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-19 22:08:42.090780 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-19 22:08:42.090791 | orchestrator |
2025-05-19 22:08:42.090802 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-05-19 22:08:42.090813 | orchestrator | Monday 19 May 2025 22:06:30 +0000 (0:00:01.078) 0:00:29.323 ************
2025-05-19 22:08:42.090823 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:08:42.090834 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:08:42.090845 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:08:42.090863 | orchestrator |
2025-05-19 22:08:42.090874 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-05-19 22:08:42.090885 | orchestrator | Monday 19 May 2025 22:06:30 +0000 (0:00:00.341) 0:00:29.664 ************
2025-05-19 22:08:42.090896 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-19 22:08:42.090906 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-19 22:08:42.090917 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-05-19 22:08:42.090928 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-19 22:08:42.090939 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-19 22:08:42.090955 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-05-19 22:08:42.090966 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-19 22:08:42.090977 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-19 22:08:42.090988 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-05-19 22:08:42.091021 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-19 22:08:42.091032 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-19 22:08:42.091042 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-05-19 22:08:42.091053 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-19 22:08:42.091064 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-19 22:08:42.091075 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-05-19 22:08:42.091085 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-19 22:08:42.091096 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-19 22:08:42.091107 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-19 22:08:42.091118 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-19 22:08:42.091129 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-19 22:08:42.091140 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-19 22:08:42.091150 | orchestrator |
2025-05-19 22:08:42.091161 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-05-19 22:08:42.091177 | orchestrator | Monday 19 May 2025 22:06:39 +0000 (0:00:08.995) 0:00:38.660 ************
2025-05-19 22:08:42.091196 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-19 22:08:42.091215 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-19 22:08:42.091235 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-19
22:08:42.091247 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 22:08:42.091258 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 22:08:42.091268 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 22:08:42.091279 | orchestrator | 2025-05-19 22:08:42.091290 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-19 22:08:42.091309 | orchestrator | Monday 19 May 2025 22:06:42 +0000 (0:00:02.633) 0:00:41.293 ************ 2025-05-19 22:08:42.091326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.091347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.091360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-19 22:08:42.091372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.091384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.091406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-19 22:08:42.091417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.091435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.091447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-19 22:08:42.091458 | orchestrator | 2025-05-19 22:08:42.091469 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-19 22:08:42.091480 | orchestrator | Monday 19 May 2025 22:06:44 +0000 (0:00:02.227) 0:00:43.521 ************ 2025-05-19 22:08:42.091491 | 
orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.091502 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:08:42.091513 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:08:42.091524 | orchestrator |
2025-05-19 22:08:42.091535 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-05-19 22:08:42.091546 | orchestrator | Monday 19 May 2025 22:06:45 +0000 (0:00:00.329) 0:00:43.850 ************
2025-05-19 22:08:42.091556 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:08:42.091567 | orchestrator |
2025-05-19 22:08:42.091578 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-05-19 22:08:42.091589 | orchestrator | Monday 19 May 2025 22:06:47 +0000 (0:00:02.288) 0:00:46.139 ************
2025-05-19 22:08:42.091600 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:08:42.091616 | orchestrator |
2025-05-19 22:08:42.091627 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-05-19 22:08:42.091638 | orchestrator | Monday 19 May 2025 22:06:49 +0000 (0:00:02.557) 0:00:48.696 ************
2025-05-19 22:08:42.091649 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:08:42.091660 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:08:42.091671 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:08:42.091681 | orchestrator |
2025-05-19 22:08:42.091692 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-05-19 22:08:42.091703 | orchestrator | Monday 19 May 2025 22:06:50 +0000 (0:00:00.847) 0:00:49.543 ************
2025-05-19 22:08:42.091714 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:08:42.091725 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:08:42.091736 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:08:42.091747 | orchestrator |
2025-05-19 22:08:42.091757 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-05-19 22:08:42.091768 | orchestrator | Monday 19 May 2025 22:06:51 +0000 (0:00:00.375) 0:00:49.919 ************
2025-05-19 22:08:42.091779 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.091790 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:08:42.091801 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:08:42.091811 | orchestrator |
2025-05-19 22:08:42.091822 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-05-19 22:08:42.091833 | orchestrator | Monday 19 May 2025 22:06:51 +0000 (0:00:00.353) 0:00:50.273 ************
2025-05-19 22:08:42.091844 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:08:42.091855 | orchestrator |
2025-05-19 22:08:42.091866 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-05-19 22:08:42.091877 | orchestrator | Monday 19 May 2025 22:07:04 +0000 (0:00:12.830) 0:01:03.104 ************
2025-05-19 22:08:42.091888 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:08:42.091898 | orchestrator |
2025-05-19 22:08:42.091909 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-19 22:08:42.091920 | orchestrator | Monday 19 May 2025 22:07:13 +0000 (0:00:09.574) 0:01:12.678 ************
2025-05-19 22:08:42.091931 | orchestrator |
2025-05-19 22:08:42.091946 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-19 22:08:42.091957 | orchestrator | Monday 19 May 2025 22:07:14 +0000 (0:00:00.261) 0:01:12.940 ************
2025-05-19 22:08:42.091968 | orchestrator |
2025-05-19 22:08:42.091979 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-19 22:08:42.092008 | orchestrator | Monday 19 May 2025 22:07:14 +0000 (0:00:00.062) 0:01:13.002 ************
2025-05-19 22:08:42.092025 | orchestrator |
2025-05-19 22:08:42.092036 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-05-19 22:08:42.092047 | orchestrator | Monday 19 May 2025 22:07:14 +0000 (0:00:00.063) 0:01:13.066 ************
2025-05-19 22:08:42.092058 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:08:42.092069 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:08:42.092080 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:08:42.092090 | orchestrator |
2025-05-19 22:08:42.092101 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-05-19 22:08:42.092112 | orchestrator | Monday 19 May 2025 22:07:34 +0000 (0:00:20.577) 0:01:33.643 ************
2025-05-19 22:08:42.092123 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:08:42.092134 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:08:42.092144 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:08:42.092155 | orchestrator |
2025-05-19 22:08:42.092166 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-05-19 22:08:42.092177 | orchestrator | Monday 19 May 2025 22:07:45 +0000 (0:00:10.850) 0:01:44.494 ************
2025-05-19 22:08:42.092188 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:08:42.092199 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:08:42.092225 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:08:42.092246 | orchestrator |
2025-05-19 22:08:42.092266 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-19 22:08:42.092285 | orchestrator | Monday 19 May 2025 22:07:57 +0000 (0:00:11.475) 0:01:55.970 ************
2025-05-19 22:08:42.092296 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:08:42.092307 | orchestrator |
2025-05-19 22:08:42.092318 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-05-19 22:08:42.092329 | orchestrator | Monday 19 May 2025 22:07:57 +0000 (0:00:00.818) 0:01:56.789 ************
2025-05-19 22:08:42.092340 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:08:42.092350 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:08:42.092361 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:08:42.092372 | orchestrator |
2025-05-19 22:08:42.092383 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-05-19 22:08:42.092394 | orchestrator | Monday 19 May 2025 22:07:58 +0000 (0:00:00.720) 0:01:57.509 ************
2025-05-19 22:08:42.092404 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:08:42.092415 | orchestrator |
2025-05-19 22:08:42.092426 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-05-19 22:08:42.092437 | orchestrator | Monday 19 May 2025 22:08:00 +0000 (0:00:01.819) 0:01:59.329 ************
2025-05-19 22:08:42.092448 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-05-19 22:08:42.092458 | orchestrator |
2025-05-19 22:08:42.092469 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-05-19 22:08:42.092480 | orchestrator | Monday 19 May 2025 22:08:10 +0000 (0:00:09.815) 0:02:09.144 ************
2025-05-19 22:08:42.092491 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-05-19 22:08:42.092501 | orchestrator |
2025-05-19 22:08:42.092512 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-05-19 22:08:42.092523 | orchestrator | Monday 19 May 2025 22:08:29 +0000 (0:00:19.569) 0:02:28.713 ************
2025-05-19 22:08:42.092534 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-05-19 22:08:42.092544 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-05-19 22:08:42.092555 | orchestrator |
2025-05-19 22:08:42.092566 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-05-19 22:08:42.092577 | orchestrator | Monday 19 May 2025 22:08:35 +0000 (0:00:06.078) 0:02:34.792 ************
2025-05-19 22:08:42.092587 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.092598 | orchestrator |
2025-05-19 22:08:42.092609 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-05-19 22:08:42.092620 | orchestrator | Monday 19 May 2025 22:08:36 +0000 (0:00:00.320) 0:02:35.113 ************
2025-05-19 22:08:42.092630 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.092641 | orchestrator |
2025-05-19 22:08:42.092652 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-05-19 22:08:42.092663 | orchestrator | Monday 19 May 2025 22:08:36 +0000 (0:00:00.240) 0:02:35.353 ************
2025-05-19 22:08:42.092673 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.092684 | orchestrator |
2025-05-19 22:08:42.092695 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-05-19 22:08:42.092706 | orchestrator | Monday 19 May 2025 22:08:36 +0000 (0:00:00.348) 0:02:35.702 ************
2025-05-19 22:08:42.092716 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.092727 | orchestrator |
2025-05-19 22:08:42.092738 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-05-19 22:08:42.092749 | orchestrator | Monday 19 May 2025 22:08:37 +0000 (0:00:00.573) 0:02:36.275 ************
2025-05-19 22:08:42.092759 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:08:42.092770 | orchestrator |
2025-05-19 22:08:42.092781 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-19 22:08:42.092792 | orchestrator | Monday 19 May 2025 22:08:40 +0000 (0:00:03.212) 0:02:39.488 ************
2025-05-19 22:08:42.092809 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:08:42.092820 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:08:42.092831 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:08:42.092841 | orchestrator |
2025-05-19 22:08:42.092852 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 22:08:42.092872 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-19 22:08:42.092883 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-19 22:08:42.092894 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-19 22:08:42.092905 | orchestrator |
2025-05-19 22:08:42.092916 | orchestrator |
2025-05-19 22:08:42.092926 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:08:42.092937 | orchestrator | Monday 19 May 2025 22:08:41 +0000 (0:00:00.693) 0:02:40.181 ************
2025-05-19 22:08:42.092948 | orchestrator | ===============================================================================
2025-05-19 22:08:42.092958 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 20.58s
2025-05-19 22:08:42.092969 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.57s
2025-05-19 22:08:42.092979 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.83s
2025-05-19 22:08:42.093043 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.48s
2025-05-19 22:08:42.093057 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.85s
2025-05-19 22:08:42.093074 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.82s
2025-05-19 22:08:42.093085 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.57s
2025-05-19 22:08:42.093096 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.00s
2025-05-19 22:08:42.093107 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.08s
2025-05-19 22:08:42.093117 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.22s
2025-05-19 22:08:42.093128 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.94s
2025-05-19 22:08:42.093139 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.36s
2025-05-19 22:08:42.093150 | orchestrator | keystone : Creating default user role ----------------------------------- 3.21s
2025-05-19 22:08:42.093161 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.63s
2025-05-19 22:08:42.093172 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.56s
2025-05-19 22:08:42.093182 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.29s
2025-05-19 22:08:42.093193 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.29s
2025-05-19 22:08:42.093204 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.23s
2025-05-19 22:08:42.093214 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.86s
2025-05-19 22:08:42.093223 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.82s
2025-05-19 22:08:42.093236 | orchestrator | 2025-05-19 22:08:42 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED
2025-05-19
22:08:42.093253 | orchestrator | 2025-05-19 22:08:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:45.127340 | orchestrator | 2025-05-19 22:08:45 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:08:45.129772 | orchestrator | 2025-05-19 22:08:45 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:45.131680 | orchestrator | 2025-05-19 22:08:45 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:08:45.134397 | orchestrator | 2025-05-19 22:08:45 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:08:45.136723 | orchestrator | 2025-05-19 22:08:45 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:08:45.137214 | orchestrator | 2025-05-19 22:08:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:48.190259 | orchestrator | 2025-05-19 22:08:48 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:08:48.191431 | orchestrator | 2025-05-19 22:08:48 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:48.193010 | orchestrator | 2025-05-19 22:08:48 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:08:48.194731 | orchestrator | 2025-05-19 22:08:48 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:08:48.196874 | orchestrator | 2025-05-19 22:08:48 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:08:48.196899 | orchestrator | 2025-05-19 22:08:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:51.250251 | orchestrator | 2025-05-19 22:08:51 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:08:51.252712 | orchestrator | 2025-05-19 22:08:51 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:51.252755 | orchestrator 
| 2025-05-19 22:08:51 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:08:51.257517 | orchestrator | 2025-05-19 22:08:51 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:08:51.260617 | orchestrator | 2025-05-19 22:08:51 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:08:51.260848 | orchestrator | 2025-05-19 22:08:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:54.308288 | orchestrator | 2025-05-19 22:08:54 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:08:54.309928 | orchestrator | 2025-05-19 22:08:54 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:54.310832 | orchestrator | 2025-05-19 22:08:54 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:08:54.312482 | orchestrator | 2025-05-19 22:08:54 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:08:54.316325 | orchestrator | 2025-05-19 22:08:54 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:08:54.316372 | orchestrator | 2025-05-19 22:08:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:08:57.339722 | orchestrator | 2025-05-19 22:08:57 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:08:57.339922 | orchestrator | 2025-05-19 22:08:57 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:08:57.340593 | orchestrator | 2025-05-19 22:08:57 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:08:57.341038 | orchestrator | 2025-05-19 22:08:57 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:08:57.341923 | orchestrator | 2025-05-19 22:08:57 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:08:57.342081 | orchestrator | 
2025-05-19 22:08:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:00.366700 | orchestrator | 2025-05-19 22:09:00 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:00.366933 | orchestrator | 2025-05-19 22:09:00 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:00.367589 | orchestrator | 2025-05-19 22:09:00 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:09:00.368137 | orchestrator | 2025-05-19 22:09:00 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:00.368752 | orchestrator | 2025-05-19 22:09:00 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:00.368779 | orchestrator | 2025-05-19 22:09:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:03.407687 | orchestrator | 2025-05-19 22:09:03 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:03.408154 | orchestrator | 2025-05-19 22:09:03 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:03.410085 | orchestrator | 2025-05-19 22:09:03 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state STARTED 2025-05-19 22:09:03.411043 | orchestrator | 2025-05-19 22:09:03 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:03.411565 | orchestrator | 2025-05-19 22:09:03 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:03.411589 | orchestrator | 2025-05-19 22:09:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:06.438296 | orchestrator | 2025-05-19 22:09:06 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:06.438442 | orchestrator | 2025-05-19 22:09:06 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:06.439112 | orchestrator | 2025-05-19 22:09:06 | INFO  | 
Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:06.439493 | orchestrator | 2025-05-19 22:09:06 | INFO  | Task aa401f09-a202-4f9b-a1a2-84cd7fe9c3da is in state SUCCESS 2025-05-19 22:09:06.440080 | orchestrator | 2025-05-19 22:09:06 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:06.440580 | orchestrator | 2025-05-19 22:09:06 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:06.440596 | orchestrator | 2025-05-19 22:09:06 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:09.465368 | orchestrator | 2025-05-19 22:09:09 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:09.465468 | orchestrator | 2025-05-19 22:09:09 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:09.465830 | orchestrator | 2025-05-19 22:09:09 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:09.466383 | orchestrator | 2025-05-19 22:09:09 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:09.466995 | orchestrator | 2025-05-19 22:09:09 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:09.467028 | orchestrator | 2025-05-19 22:09:09 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:12.500351 | orchestrator | 2025-05-19 22:09:12 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:12.501677 | orchestrator | 2025-05-19 22:09:12 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:12.504063 | orchestrator | 2025-05-19 22:09:12 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:12.505189 | orchestrator | 2025-05-19 22:09:12 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:12.507029 | orchestrator | 2025-05-19 22:09:12 | INFO  | Task 
22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:12.509538 | orchestrator | 2025-05-19 22:09:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:15.550676 | orchestrator | 2025-05-19 22:09:15 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:15.551213 | orchestrator | 2025-05-19 22:09:15 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:15.555345 | orchestrator | 2025-05-19 22:09:15 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:15.555900 | orchestrator | 2025-05-19 22:09:15 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:15.556697 | orchestrator | 2025-05-19 22:09:15 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:15.556731 | orchestrator | 2025-05-19 22:09:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:18.593360 | orchestrator | 2025-05-19 22:09:18 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:18.593693 | orchestrator | 2025-05-19 22:09:18 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:18.594360 | orchestrator | 2025-05-19 22:09:18 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:18.595038 | orchestrator | 2025-05-19 22:09:18 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:18.595513 | orchestrator | 2025-05-19 22:09:18 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:18.595536 | orchestrator | 2025-05-19 22:09:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:21.633628 | orchestrator | 2025-05-19 22:09:21 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:21.633723 | orchestrator | 2025-05-19 22:09:21 | INFO  | Task 
c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:21.634382 | orchestrator | 2025-05-19 22:09:21 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:21.634913 | orchestrator | 2025-05-19 22:09:21 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:21.635492 | orchestrator | 2025-05-19 22:09:21 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:21.635514 | orchestrator | 2025-05-19 22:09:21 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:24.662367 | orchestrator | 2025-05-19 22:09:24 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:24.663278 | orchestrator | 2025-05-19 22:09:24 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:24.663327 | orchestrator | 2025-05-19 22:09:24 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:24.665499 | orchestrator | 2025-05-19 22:09:24 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:24.666235 | orchestrator | 2025-05-19 22:09:24 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:24.666336 | orchestrator | 2025-05-19 22:09:24 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:27.691772 | orchestrator | 2025-05-19 22:09:27 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:27.693685 | orchestrator | 2025-05-19 22:09:27 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:27.694411 | orchestrator | 2025-05-19 22:09:27 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:27.697970 | orchestrator | 2025-05-19 22:09:27 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:27.698366 | orchestrator | 2025-05-19 22:09:27 | INFO  | Task 
22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:27.698392 | orchestrator | 2025-05-19 22:09:27 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:30.721481 | orchestrator | 2025-05-19 22:09:30 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:30.721794 | orchestrator | 2025-05-19 22:09:30 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:30.724847 | orchestrator | 2025-05-19 22:09:30 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:30.725388 | orchestrator | 2025-05-19 22:09:30 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:30.728047 | orchestrator | 2025-05-19 22:09:30 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:30.728087 | orchestrator | 2025-05-19 22:09:30 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:33.769117 | orchestrator | 2025-05-19 22:09:33 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:33.770150 | orchestrator | 2025-05-19 22:09:33 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:33.770665 | orchestrator | 2025-05-19 22:09:33 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED 2025-05-19 22:09:33.771299 | orchestrator | 2025-05-19 22:09:33 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:33.771871 | orchestrator | 2025-05-19 22:09:33 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:33.771890 | orchestrator | 2025-05-19 22:09:33 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:36.811132 | orchestrator | 2025-05-19 22:09:36 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:36.811399 | orchestrator | 2025-05-19 22:09:36 | INFO  | Task 
c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED
2025-05-19 22:09:36.811806 | orchestrator | 2025-05-19 22:09:36 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state STARTED
2025-05-19 22:09:36.812412 | orchestrator | 2025-05-19 22:09:36 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:09:36.813090 | orchestrator | 2025-05-19 22:09:36 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED
2025-05-19 22:09:36.813337 | orchestrator | 2025-05-19 22:09:36 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:09:39.842343 | orchestrator | 2025-05-19 22:09:39 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:09:39.843129 | orchestrator | 2025-05-19 22:09:39 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED
2025-05-19 22:09:39.843175 | orchestrator | 2025-05-19 22:09:39 | INFO  | Task ac4020ed-d23f-4e10-80c7-4257264992de is in state SUCCESS
2025-05-19 22:09:39.843189 | orchestrator |
2025-05-19 22:09:39.843199 | orchestrator |
2025-05-19 22:09:39.843209 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:09:39.843219 | orchestrator |
2025-05-19 22:09:39.843229 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:09:39.843239 | orchestrator | Monday 19 May 2025 22:08:34 +0000 (0:00:00.259) 0:00:00.259 ************
2025-05-19 22:09:39.843249 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:09:39.843285 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:09:39.843302 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:09:39.843318 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:09:39.843334 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:09:39.843350 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:09:39.843365 | orchestrator | ok: [testbed-manager]
2025-05-19 22:09:39.843381 | orchestrator |
2025-05-19 22:09:39.843397 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:09:39.843413 | orchestrator | Monday 19 May 2025 22:08:35 +0000 (0:00:00.814) 0:00:01.074 ************
2025-05-19 22:09:39.843430 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-05-19 22:09:39.843446 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-05-19 22:09:39.843463 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-05-19 22:09:39.843494 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-05-19 22:09:39.843512 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-05-19 22:09:39.843527 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-05-19 22:09:39.843542 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-05-19 22:09:39.843558 | orchestrator |
2025-05-19 22:09:39.843575 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-19 22:09:39.843591 | orchestrator |
2025-05-19 22:09:39.843608 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-05-19 22:09:39.843624 | orchestrator | Monday 19 May 2025 22:08:36 +0000 (0:00:01.007) 0:00:02.081 ************
2025-05-19 22:09:39.843641 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2025-05-19 22:09:39.843658 | orchestrator |
2025-05-19 22:09:39.843674 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-05-19 22:09:39.843691 | orchestrator | Monday 19 May 2025 22:08:38 +0000 (0:00:01.952) 0:00:04.034 ************
2025-05-19 22:09:39.843706 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-05-19 22:09:39.843722 | orchestrator
| 2025-05-19 22:09:39.843737 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-05-19 22:09:39.843755 | orchestrator | Monday 19 May 2025 22:08:41 +0000 (0:00:03.262) 0:00:07.297 ************ 2025-05-19 22:09:39.843776 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-05-19 22:09:39.843795 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-05-19 22:09:39.843814 | orchestrator | 2025-05-19 22:09:39.843831 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-05-19 22:09:39.843849 | orchestrator | Monday 19 May 2025 22:08:47 +0000 (0:00:05.206) 0:00:12.503 ************ 2025-05-19 22:09:39.843868 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 22:09:39.843886 | orchestrator | 2025-05-19 22:09:39.843936 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-05-19 22:09:39.843955 | orchestrator | Monday 19 May 2025 22:08:49 +0000 (0:00:02.745) 0:00:15.249 ************ 2025-05-19 22:09:39.843974 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 22:09:39.843994 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-05-19 22:09:39.844014 | orchestrator | 2025-05-19 22:09:39.844034 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-05-19 22:09:39.844055 | orchestrator | Monday 19 May 2025 22:08:53 +0000 (0:00:03.414) 0:00:18.663 ************ 2025-05-19 22:09:39.844076 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 22:09:39.844096 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-05-19 22:09:39.844115 | orchestrator | 2025-05-19 22:09:39.844130 | orchestrator | TASK 
[service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-05-19 22:09:39.844358 | orchestrator | Monday 19 May 2025 22:08:58 +0000 (0:00:05.511) 0:00:24.175 ************ 2025-05-19 22:09:39.844388 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-05-19 22:09:39.844437 | orchestrator | 2025-05-19 22:09:39.844447 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:09:39.844458 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:09:39.844468 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:09:39.844479 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:09:39.844489 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:09:39.844499 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:09:39.844523 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:09:39.844533 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:09:39.844672 | orchestrator | 2025-05-19 22:09:39.844686 | orchestrator | 2025-05-19 22:09:39.844697 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:09:39.844707 | orchestrator | Monday 19 May 2025 22:09:04 +0000 (0:00:05.787) 0:00:29.962 ************ 2025-05-19 22:09:39.844716 | orchestrator | =============================================================================== 2025-05-19 22:09:39.844726 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.79s 2025-05-19 22:09:39.844736 | 
orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.51s 2025-05-19 22:09:39.844746 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.21s 2025-05-19 22:09:39.844755 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.41s 2025-05-19 22:09:39.844765 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.26s 2025-05-19 22:09:39.844775 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.75s 2025-05-19 22:09:39.844792 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.95s 2025-05-19 22:09:39.844803 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.01s 2025-05-19 22:09:39.844812 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.81s 2025-05-19 22:09:39.844822 | orchestrator | 2025-05-19 22:09:39.844831 | orchestrator | 2025-05-19 22:09:39.844841 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-05-19 22:09:39.844851 | orchestrator | 2025-05-19 22:09:39.844860 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-05-19 22:09:39.844870 | orchestrator | Monday 19 May 2025 22:08:27 +0000 (0:00:00.257) 0:00:00.257 ************ 2025-05-19 22:09:39.844880 | orchestrator | changed: [testbed-manager] 2025-05-19 22:09:39.844890 | orchestrator | 2025-05-19 22:09:39.844925 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-05-19 22:09:39.844936 | orchestrator | Monday 19 May 2025 22:08:28 +0000 (0:00:01.566) 0:00:01.823 ************ 2025-05-19 22:09:39.844946 | orchestrator | changed: [testbed-manager] 2025-05-19 22:09:39.844955 | orchestrator | 2025-05-19 22:09:39.844965 | orchestrator | TASK [Set 
mgr/dashboard/server_port to 7000] *********************************** 2025-05-19 22:09:39.844975 | orchestrator | Monday 19 May 2025 22:08:29 +0000 (0:00:01.029) 0:00:02.852 ************ 2025-05-19 22:09:39.844985 | orchestrator | changed: [testbed-manager] 2025-05-19 22:09:39.845005 | orchestrator | 2025-05-19 22:09:39.845015 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-05-19 22:09:39.845025 | orchestrator | Monday 19 May 2025 22:08:30 +0000 (0:00:01.052) 0:00:03.905 ************ 2025-05-19 22:09:39.845035 | orchestrator | changed: [testbed-manager] 2025-05-19 22:09:39.845045 | orchestrator | 2025-05-19 22:09:39.845054 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-05-19 22:09:39.845064 | orchestrator | Monday 19 May 2025 22:08:32 +0000 (0:00:01.261) 0:00:05.166 ************ 2025-05-19 22:09:39.845074 | orchestrator | changed: [testbed-manager] 2025-05-19 22:09:39.845084 | orchestrator | 2025-05-19 22:09:39.845094 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-05-19 22:09:39.845103 | orchestrator | Monday 19 May 2025 22:08:33 +0000 (0:00:01.413) 0:00:06.579 ************ 2025-05-19 22:09:39.845174 | orchestrator | changed: [testbed-manager] 2025-05-19 22:09:39.845185 | orchestrator | 2025-05-19 22:09:39.845195 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-05-19 22:09:39.845204 | orchestrator | Monday 19 May 2025 22:08:34 +0000 (0:00:00.841) 0:00:07.420 ************ 2025-05-19 22:09:39.845214 | orchestrator | changed: [testbed-manager] 2025-05-19 22:09:39.845224 | orchestrator | 2025-05-19 22:09:39.845234 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-05-19 22:09:39.845244 | orchestrator | Monday 19 May 2025 22:08:35 +0000 (0:00:01.047) 0:00:08.468 ************ 2025-05-19 22:09:39.845253 
| orchestrator | changed: [testbed-manager] 2025-05-19 22:09:39.845263 | orchestrator | 2025-05-19 22:09:39.845273 | orchestrator | TASK [Create admin user] ******************************************************* 2025-05-19 22:09:39.845283 | orchestrator | Monday 19 May 2025 22:08:36 +0000 (0:00:00.991) 0:00:09.459 ************ 2025-05-19 22:09:39.845292 | orchestrator | changed: [testbed-manager] 2025-05-19 22:09:39.845302 | orchestrator | 2025-05-19 22:09:39.845312 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-05-19 22:09:39.845324 | orchestrator | Monday 19 May 2025 22:09:15 +0000 (0:00:38.738) 0:00:48.198 ************ 2025-05-19 22:09:39.845335 | orchestrator | skipping: [testbed-manager] 2025-05-19 22:09:39.845346 | orchestrator | 2025-05-19 22:09:39.845357 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-19 22:09:39.845368 | orchestrator | 2025-05-19 22:09:39.845379 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-19 22:09:39.845390 | orchestrator | Monday 19 May 2025 22:09:15 +0000 (0:00:00.157) 0:00:48.355 ************ 2025-05-19 22:09:39.845402 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:09:39.845413 | orchestrator | 2025-05-19 22:09:39.845424 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-19 22:09:39.845435 | orchestrator | 2025-05-19 22:09:39.845447 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-19 22:09:39.845458 | orchestrator | Monday 19 May 2025 22:09:26 +0000 (0:00:11.437) 0:00:59.793 ************ 2025-05-19 22:09:39.845469 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:09:39.845480 | orchestrator | 2025-05-19 22:09:39.845491 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-05-19 
22:09:39.845503 | orchestrator | 2025-05-19 22:09:39.845519 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-05-19 22:09:39.845551 | orchestrator | Monday 19 May 2025 22:09:37 +0000 (0:00:11.219) 0:01:11.013 ************ 2025-05-19 22:09:39.845569 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:09:39.845587 | orchestrator | 2025-05-19 22:09:39.845605 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:09:39.845625 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-19 22:09:39.845645 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:09:39.845676 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:09:39.845691 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:09:39.845701 | orchestrator | 2025-05-19 22:09:39.845711 | orchestrator | 2025-05-19 22:09:39.845720 | orchestrator | 2025-05-19 22:09:39.845730 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:09:39.845739 | orchestrator | Monday 19 May 2025 22:09:38 +0000 (0:00:01.112) 0:01:12.126 ************ 2025-05-19 22:09:39.845755 | orchestrator | =============================================================================== 2025-05-19 22:09:39.845765 | orchestrator | Create admin user ------------------------------------------------------ 38.74s 2025-05-19 22:09:39.845775 | orchestrator | Restart ceph manager service ------------------------------------------- 23.77s 2025-05-19 22:09:39.845784 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.57s 2025-05-19 22:09:39.845794 | orchestrator | Set mgr/dashboard/standby_behaviour to error 
---------------------------- 1.41s 2025-05-19 22:09:39.845803 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.27s 2025-05-19 22:09:39.845813 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.05s 2025-05-19 22:09:39.845822 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.05s 2025-05-19 22:09:39.845831 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.03s 2025-05-19 22:09:39.845841 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.99s 2025-05-19 22:09:39.845857 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.84s 2025-05-19 22:09:39.845872 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s 2025-05-19 22:09:39.845887 | orchestrator | 2025-05-19 22:09:39 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:39.845950 | orchestrator | 2025-05-19 22:09:39 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:39.845965 | orchestrator | 2025-05-19 22:09:39 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:42.892348 | orchestrator | 2025-05-19 22:09:42 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:09:42.894086 | orchestrator | 2025-05-19 22:09:42 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED 2025-05-19 22:09:42.895537 | orchestrator | 2025-05-19 22:09:42 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:09:42.895929 | orchestrator | 2025-05-19 22:09:42 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:09:42.895952 | orchestrator | 2025-05-19 22:09:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:09:45.923508 | orchestrator | 2025-05-19 
22:09:45 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED [... identical polling cycles elided, repeated every ~3 seconds from 22:09:45 to 22:11:14: tasks eb47a7a0-fab5-4346-8b97-a2d6fabc29f2, c6d57e22-8304-4ea5-9eb3-14fd4a4de565, 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 and 22d3295b-99e9-4369-a565-d9d91ef05718 remained in state STARTED ...] 2025-05-19 22:11:17.236508 | orchestrator | 2025-05-19 22:11:17 | INFO  | Task 
eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:11:17.237051 | orchestrator | 2025-05-19 22:11:17 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED
2025-05-19 22:11:17.238364 | orchestrator | 2025-05-19 22:11:17 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:11:17.240559 | orchestrator | 2025-05-19 22:11:17 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED
2025-05-19 22:11:17.240599 | orchestrator | 2025-05-19 22:11:17 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:11:20.299270 | orchestrator | 2025-05-19 22:11:20 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:11:20.302836 | orchestrator | 2025-05-19 22:11:20 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state STARTED
2025-05-19 22:11:20.308027 | orchestrator | 2025-05-19 22:11:20 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:11:20.310481 | orchestrator | 2025-05-19 22:11:20 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED
2025-05-19 22:11:20.310648 | orchestrator | 2025-05-19 22:11:20 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:11:23.358257 | orchestrator | 2025-05-19 22:11:23 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:11:23.365069 | orchestrator |
2025-05-19 22:11:23.365195 | orchestrator |
2025-05-19 22:11:23.365213 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:11:23.365226 | orchestrator |
2025-05-19 22:11:23.365259 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:11:23.365266 | orchestrator | Monday 19 May 2025 22:08:35 +0000 (0:00:00.259) 0:00:00.259 ************
2025-05-19 22:11:23.365272 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:11:23.365280 | orchestrator | ok: [testbed-node-1]
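The recurring status lines above come from the deployment tooling polling its background tasks until none is still STARTED before the next play begins. A minimal sketch of such a poll-until-done loop, assuming a hypothetical `get_task_state` lookup that stands in for the real task-state API:

```python
import time

def make_get_task_state(finish_after):
    """Hypothetical stand-in for the real task-state lookup: each task
    reports STARTED until it has been checked `finish_after` times."""
    checks = {}
    def get_task_state(task_id):
        checks[task_id] = checks.get(task_id, 0) + 1
        return "SUCCESS" if checks[task_id] >= finish_after else "STARTED"
    return get_task_state

def wait_for_tasks(task_ids, get_task_state, interval=1.0, sleep=time.sleep):
    """Poll every pending task, log each check, and wait `interval`
    seconds between rounds until no task is still STARTED."""
    log = []
    pending = list(task_ids)
    while pending:
        # Query each pending task once per round and record its state.
        states = [(t, get_task_state(t)) for t in pending]
        for task_id, state in states:
            log.append(f"Task {task_id} is in state {state}")
        # Only tasks still STARTED are polled again next round.
        pending = [t for t, s in states if s == "STARTED"]
        if pending:
            log.append(f"Wait {interval:.0f} second(s) until the next check")
            sleep(interval)
    return log
```

Note that the gap between rounds in the log above is about three seconds despite the "Wait 1 second(s)" message; the per-task status queries themselves take time on top of the sleep.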
2025-05-19 22:11:23.365286 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:11:23.365292 | orchestrator |
2025-05-19 22:11:23.365302 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:11:23.365314 | orchestrator | Monday 19 May 2025 22:08:35 +0000 (0:00:00.321) 0:00:00.580 ************
2025-05-19 22:11:23.365321 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-05-19 22:11:23.365327 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-05-19 22:11:23.365334 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-05-19 22:11:23.365341 | orchestrator |
2025-05-19 22:11:23.365347 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-05-19 22:11:23.365354 | orchestrator |
2025-05-19 22:11:23.365373 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-19 22:11:23.365386 | orchestrator | Monday 19 May 2025 22:08:35 +0000 (0:00:00.402) 0:00:00.983 ************
2025-05-19 22:11:23.365398 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:11:23.365411 | orchestrator |
2025-05-19 22:11:23.365421 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-05-19 22:11:23.365430 | orchestrator | Monday 19 May 2025 22:08:36 +0000 (0:00:00.970) 0:00:01.953 ************
2025-05-19 22:11:23.365440 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-05-19 22:11:23.365450 | orchestrator |
2025-05-19 22:11:23.365459 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-05-19 22:11:23.365469 | orchestrator | Monday 19 May 2025 22:08:40 +0000 (0:00:03.839) 0:00:05.793 ************
2025-05-19 22:11:23.365480 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-05-19 22:11:23.365510 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-05-19 22:11:23.365522 | orchestrator |
2025-05-19 22:11:23.365532 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-05-19 22:11:23.365543 | orchestrator | Monday 19 May 2025 22:08:45 +0000 (0:00:05.166) 0:00:10.959 ************
2025-05-19 22:11:23.365555 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-05-19 22:11:23.365565 | orchestrator |
2025-05-19 22:11:23.365575 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-05-19 22:11:23.365587 | orchestrator | Monday 19 May 2025 22:08:48 +0000 (0:00:02.966) 0:00:13.925 ************
2025-05-19 22:11:23.365599 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 22:11:23.365609 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-05-19 22:11:23.365618 | orchestrator |
2025-05-19 22:11:23.365628 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-05-19 22:11:23.365638 | orchestrator | Monday 19 May 2025 22:08:52 +0000 (0:00:03.655) 0:00:17.581 ************
2025-05-19 22:11:23.365648 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-19 22:11:23.365658 | orchestrator |
2025-05-19 22:11:23.365669 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-05-19 22:11:23.365679 | orchestrator | Monday 19 May 2025 22:08:55 +0000 (0:00:02.965) 0:00:20.547 ************
2025-05-19 22:11:23.365688 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-05-19 22:11:23.365699 | orchestrator |
2025-05-19 22:11:23.365709 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-05-19
22:11:23.365719 | orchestrator | Monday 19 May 2025 22:08:59 +0000 (0:00:03.973) 0:00:24.520 ************ 2025-05-19 22:11:23.365784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.365796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.365814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.365821 | orchestrator | 2025-05-19 22:11:23.365829 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-19 22:11:23.365835 | orchestrator | Monday 19 May 2025 22:09:06 +0000 (0:00:06.838) 0:00:31.358 ************ 2025-05-19 22:11:23.365841 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:11:23.365848 | orchestrator | 2025-05-19 22:11:23.365872 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-19 22:11:23.365891 | orchestrator | Monday 19 May 2025 
22:09:06 +0000 (0:00:00.594) 0:00:31.953 ************
2025-05-19 22:11:23.365900 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:11:23.365912 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:11:23.365923 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:11:23.365934 | orchestrator |
2025-05-19 22:11:23.365947 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-05-19 22:11:23.365956 | orchestrator | Monday 19 May 2025 22:09:10 +0000 (0:00:03.431) 0:00:35.384 ************
2025-05-19 22:11:23.365977 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 22:11:23.366001 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 22:11:23.366061 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 22:11:23.366074 | orchestrator |
2025-05-19 22:11:23.366085 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-05-19 22:11:23.366096 | orchestrator | Monday 19 May 2025 22:09:11 +0000 (0:00:01.353) 0:00:36.738 ************
2025-05-19 22:11:23.366102 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 22:11:23.366109 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 22:11:23.366117 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-19 22:11:23.366122 | orchestrator |
2025-05-19 22:11:23.366129 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-05-19 22:11:23.366135 | orchestrator | Monday 19 May 2025 22:09:12 +0000 (0:00:00.924) 0:00:37.662 ************
2025-05-19 22:11:23.366142 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:11:23.366150 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:11:23.366157 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:11:23.366163 | orchestrator |
2025-05-19 22:11:23.366170 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-05-19 22:11:23.366176 | orchestrator | Monday 19 May 2025 22:09:13 +0000 (0:00:00.129) 0:00:38.324 ************
2025-05-19 22:11:23.366185 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:11:23.366198 | orchestrator |
2025-05-19 22:11:23.366208 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-05-19 22:11:23.366218 | orchestrator | Monday 19 May 2025 22:09:13 +0000 (0:00:00.129) 0:00:38.454 ************
2025-05-19 22:11:23.366229 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:11:23.366241 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:11:23.366251 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:11:23.366262 | orchestrator |
2025-05-19 22:11:23.366272 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-19 22:11:23.366284 | orchestrator | Monday 19 May 2025 22:09:13 +0000 (0:00:00.235) 0:00:38.689 ************
2025-05-19 22:11:23.366294 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:11:23.366306 | orchestrator |
2025-05-19 22:11:23.366317 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-05-19 22:11:23.366326 | orchestrator | Monday 19 May 2025 22:09:13 +0000 (0:00:00.460) 0:00:39.149 ************
2025-05-19 22:11:23.366358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.366423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.366439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.366449 | orchestrator | 2025-05-19 22:11:23.366463 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-19 22:11:23.366470 | orchestrator | Monday 19 May 2025 22:09:19 +0000 (0:00:05.367) 0:00:44.516 ************ 2025-05-19 22:11:23.366489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 22:11:23.366497 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:23.366510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 22:11:23.366520 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:23.366548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 22:11:23.366562 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:23.366571 | orchestrator | 2025-05-19 22:11:23.366583 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-19 22:11:23.366589 | orchestrator | Monday 19 May 2025 22:09:22 +0000 (0:00:03.394) 0:00:47.911 ************ 2025-05-19 22:11:23.366596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 22:11:23.366603 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:23.366619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 22:11:23.366631 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:23.366638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-19 22:11:23.366645 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:23.366652 | orchestrator | 2025-05-19 22:11:23.366659 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-19 22:11:23.366666 | orchestrator | Monday 19 May 2025 22:09:26 +0000 (0:00:03.444) 0:00:51.355 ************ 2025-05-19 22:11:23.366673 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:23.366680 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:23.366691 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:23.366700 | orchestrator | 2025-05-19 22:11:23.366706 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-19 22:11:23.366713 | orchestrator | Monday 19 May 2025 22:09:29 +0000 (0:00:03.594) 0:00:54.949 ************ 2025-05-19 22:11:23.366727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.366814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.366823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.366840 | orchestrator | 2025-05-19 22:11:23.366851 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-05-19 22:11:23.366862 | orchestrator | Monday 19 May 2025 22:09:32 +0000 (0:00:03.227) 0:00:58.177 ************ 2025-05-19 22:11:23.366868 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:11:23.366873 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:11:23.366878 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:11:23.366883 | orchestrator | 2025-05-19 22:11:23.366888 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-05-19 22:11:23.366896 | orchestrator | Monday 19 May 2025 22:09:39 +0000 (0:00:06.160) 0:01:04.337 ************ 2025-05-19 22:11:23.366906 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:23.366915 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:23.366924 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:23.366931 | orchestrator | 2025-05-19 22:11:23.366936 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-05-19 22:11:23.366952 | orchestrator | 2025-05-19 22:11:23 | INFO  | Task c6d57e22-8304-4ea5-9eb3-14fd4a4de565 is in state SUCCESS 2025-05-19 22:11:23.366959 | orchestrator | Monday 19 May 2025 22:09:43 +0000 (0:00:04.212) 0:01:08.550 ************ 2025-05-19 22:11:23.366965 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 22:11:23.366970 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:23.366976 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:23.366983 | orchestrator | 2025-05-19 22:11:23.366988 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-05-19 22:11:23.366994 | orchestrator | Monday 19 May 2025 22:09:48 +0000 (0:00:05.508) 0:01:14.059 ************ 2025-05-19 22:11:23.367000 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:23.367006 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:23.367011 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:23.367016 | orchestrator | 2025-05-19 22:11:23.367022 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-05-19 22:11:23.367029 | orchestrator | Monday 19 May 2025 22:09:54 +0000 (0:00:05.870) 0:01:19.929 ************ 2025-05-19 22:11:23.367034 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:23.367040 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:23.367045 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:23.367051 | orchestrator | 2025-05-19 22:11:23.367057 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-05-19 22:11:23.367062 | orchestrator | Monday 19 May 2025 22:09:59 +0000 (0:00:04.656) 0:01:24.586 ************ 2025-05-19 22:11:23.367068 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:23.367074 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:23.367079 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:23.367085 | orchestrator | 2025-05-19 22:11:23.367091 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-05-19 22:11:23.367097 | orchestrator | Monday 19 May 2025 22:09:59 +0000 (0:00:00.241) 0:01:24.828 ************ 2025-05-19 22:11:23.367103 | orchestrator | skipping: 
[testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-19 22:11:23.367109 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:23.367122 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-19 22:11:23.367129 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:23.367135 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-05-19 22:11:23.367142 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:23.367147 | orchestrator | 2025-05-19 22:11:23.367153 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-19 22:11:23.367158 | orchestrator | Monday 19 May 2025 22:10:02 +0000 (0:00:03.399) 0:01:28.227 ************ 2025-05-19 22:11:23.367166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.367184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.367197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-19 22:11:23.367204 | orchestrator | 2025-05-19 22:11:23.367210 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-19 22:11:23.367215 | orchestrator | Monday 19 May 2025 22:10:06 +0000 (0:00:03.366) 0:01:31.594 ************ 2025-05-19 22:11:23.367222 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:23.367228 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:23.367234 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:23.367239 | orchestrator | 2025-05-19 22:11:23.367245 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-05-19 22:11:23.367251 | orchestrator | Monday 19 May 2025 22:10:06 +0000 (0:00:00.268) 0:01:31.862 ************ 2025-05-19 22:11:23.367257 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:11:23.367263 | orchestrator | 2025-05-19 22:11:23.367269 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-19 22:11:23.367277 | orchestrator | Monday 19 May 2025 22:10:08 +0000 (0:00:01.841) 0:01:33.704 ************ 2025-05-19 22:11:23.367283 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:11:23.367289 | orchestrator | 2025-05-19 22:11:23.367295 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-19 22:11:23.367300 | orchestrator | Monday 19 May 2025 22:10:10 +0000 (0:00:02.000) 0:01:35.705 ************ 2025-05-19 22:11:23.367306 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:11:23.367312 | orchestrator | 2025-05-19 22:11:23.367318 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-19 22:11:23.367323 | orchestrator | Monday 19 May 2025 22:10:12 +0000 (0:00:01.918) 0:01:37.623 ************ 2025-05-19 22:11:23.367329 | orchestrator | 
changed: [testbed-node-0] 2025-05-19 22:11:23.367335 | orchestrator | 2025-05-19 22:11:23.367341 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-19 22:11:23.367355 | orchestrator | Monday 19 May 2025 22:10:37 +0000 (0:00:24.964) 0:02:02.587 ************ 2025-05-19 22:11:23.367365 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:11:23.367374 | orchestrator | 2025-05-19 22:11:23.367389 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-19 22:11:23.367399 | orchestrator | Monday 19 May 2025 22:10:39 +0000 (0:00:02.445) 0:02:05.032 ************ 2025-05-19 22:11:23.367407 | orchestrator | 2025-05-19 22:11:23.367422 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-19 22:11:23.367432 | orchestrator | Monday 19 May 2025 22:10:39 +0000 (0:00:00.063) 0:02:05.095 ************ 2025-05-19 22:11:23.367442 | orchestrator | 2025-05-19 22:11:23.367451 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-19 22:11:23.367461 | orchestrator | Monday 19 May 2025 22:10:39 +0000 (0:00:00.063) 0:02:05.158 ************ 2025-05-19 22:11:23.367470 | orchestrator | 2025-05-19 22:11:23.367480 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-19 22:11:23.367488 | orchestrator | Monday 19 May 2025 22:10:39 +0000 (0:00:00.066) 0:02:05.225 ************ 2025-05-19 22:11:23.367498 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:11:23.367509 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:11:23.367517 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:11:23.367526 | orchestrator | 2025-05-19 22:11:23.367535 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:11:23.367546 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 
failed=0 skipped=12  rescued=0 ignored=0 2025-05-19 22:11:23.367557 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-19 22:11:23.367567 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-19 22:11:23.367577 | orchestrator | 2025-05-19 22:11:23.367586 | orchestrator | 2025-05-19 22:11:23.367596 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:11:23.367606 | orchestrator | Monday 19 May 2025 22:11:20 +0000 (0:00:40.111) 0:02:45.337 ************ 2025-05-19 22:11:23.367612 | orchestrator | =============================================================================== 2025-05-19 22:11:23.367617 | orchestrator | glance : Restart glance-api container ---------------------------------- 40.11s 2025-05-19 22:11:23.367623 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 24.96s 2025-05-19 22:11:23.367629 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.84s 2025-05-19 22:11:23.367634 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.16s 2025-05-19 22:11:23.367640 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.87s 2025-05-19 22:11:23.367646 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.51s 2025-05-19 22:11:23.367652 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.37s 2025-05-19 22:11:23.367657 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.17s 2025-05-19 22:11:23.367662 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.66s 2025-05-19 22:11:23.367668 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 
4.21s 2025-05-19 22:11:23.367673 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.97s 2025-05-19 22:11:23.367679 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.84s 2025-05-19 22:11:23.367685 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.66s 2025-05-19 22:11:23.367691 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.59s 2025-05-19 22:11:23.367699 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.44s 2025-05-19 22:11:23.367709 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.43s 2025-05-19 22:11:23.367719 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.40s 2025-05-19 22:11:23.367727 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.39s 2025-05-19 22:11:23.367781 | orchestrator | glance : Check glance containers ---------------------------------------- 3.37s 2025-05-19 22:11:23.367799 | orchestrator | glance : Copying over config.json files for services -------------------- 3.23s 2025-05-19 22:11:23.367810 | orchestrator | 2025-05-19 22:11:23 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:23.367820 | orchestrator | 2025-05-19 22:11:23 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:23.368084 | orchestrator | 2025-05-19 22:11:23 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:11:23.368674 | orchestrator | 2025-05-19 22:11:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:26.427261 | orchestrator | 2025-05-19 22:11:26 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:26.429470 | orchestrator | 2025-05-19 22:11:26 | INFO  | Task 
a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:26.429790 | orchestrator | 2025-05-19 22:11:26 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:26.431719 | orchestrator | 2025-05-19 22:11:26 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:11:26.431804 | orchestrator | 2025-05-19 22:11:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:29.488219 | orchestrator | 2025-05-19 22:11:29 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:29.488323 | orchestrator | 2025-05-19 22:11:29 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:29.490309 | orchestrator | 2025-05-19 22:11:29 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:29.491329 | orchestrator | 2025-05-19 22:11:29 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:11:29.491364 | orchestrator | 2025-05-19 22:11:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:32.553140 | orchestrator | 2025-05-19 22:11:32 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:32.553225 | orchestrator | 2025-05-19 22:11:32 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:32.554441 | orchestrator | 2025-05-19 22:11:32 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:32.555692 | orchestrator | 2025-05-19 22:11:32 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:11:32.555752 | orchestrator | 2025-05-19 22:11:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:35.610130 | orchestrator | 2025-05-19 22:11:35 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:35.610859 | orchestrator | 2025-05-19 22:11:35 | INFO  | Task 
a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:35.612296 | orchestrator | 2025-05-19 22:11:35 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:35.616130 | orchestrator | 2025-05-19 22:11:35 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:11:35.616405 | orchestrator | 2025-05-19 22:11:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:38.690435 | orchestrator | 2025-05-19 22:11:38 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:38.691883 | orchestrator | 2025-05-19 22:11:38 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:38.696244 | orchestrator | 2025-05-19 22:11:38 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:38.699028 | orchestrator | 2025-05-19 22:11:38 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:11:38.700663 | orchestrator | 2025-05-19 22:11:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:41.748115 | orchestrator | 2025-05-19 22:11:41 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:41.748773 | orchestrator | 2025-05-19 22:11:41 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:41.751046 | orchestrator | 2025-05-19 22:11:41 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:41.752100 | orchestrator | 2025-05-19 22:11:41 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:11:41.752513 | orchestrator | 2025-05-19 22:11:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:44.802700 | orchestrator | 2025-05-19 22:11:44 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:44.804235 | orchestrator | 2025-05-19 22:11:44 | INFO  | Task 
a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:44.806121 | orchestrator | 2025-05-19 22:11:44 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:44.808141 | orchestrator | 2025-05-19 22:11:44 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:11:44.808210 | orchestrator | 2025-05-19 22:11:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:47.853624 | orchestrator | 2025-05-19 22:11:47 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:47.854834 | orchestrator | 2025-05-19 22:11:47 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:47.856774 | orchestrator | 2025-05-19 22:11:47 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:47.858572 | orchestrator | 2025-05-19 22:11:47 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:11:47.858619 | orchestrator | 2025-05-19 22:11:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:50.912992 | orchestrator | 2025-05-19 22:11:50 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:50.915377 | orchestrator | 2025-05-19 22:11:50 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:50.917829 | orchestrator | 2025-05-19 22:11:50 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:50.920215 | orchestrator | 2025-05-19 22:11:50 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state STARTED 2025-05-19 22:11:50.920326 | orchestrator | 2025-05-19 22:11:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:53.978179 | orchestrator | 2025-05-19 22:11:53 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED 2025-05-19 22:11:53.978585 | orchestrator | 2025-05-19 22:11:53 | INFO  | Task 
eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:53.980170 | orchestrator | 2025-05-19 22:11:53 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:53.981813 | orchestrator | 2025-05-19 22:11:53 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:53.983260 | orchestrator | 2025-05-19 22:11:53 | INFO  | Task 2a9d937f-4de5-4202-95ea-cc480ea115da is in state STARTED 2025-05-19 22:11:53.987260 | orchestrator | 2025-05-19 22:11:53 | INFO  | Task 22d3295b-99e9-4369-a565-d9d91ef05718 is in state SUCCESS 2025-05-19 22:11:53.989025 | orchestrator | 2025-05-19 22:11:53.989064 | orchestrator | 2025-05-19 22:11:53.989075 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:11:53.989112 | orchestrator | 2025-05-19 22:11:53.989123 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:11:53.989134 | orchestrator | Monday 19 May 2025 22:08:27 +0000 (0:00:00.302) 0:00:00.302 ************ 2025-05-19 22:11:53.989144 | orchestrator | ok: [testbed-manager] 2025-05-19 22:11:53.989155 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:11:53.989165 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:11:53.989175 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:11:53.989184 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:11:53.989194 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:11:53.989204 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:11:53.989213 | orchestrator | 2025-05-19 22:11:53.989223 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:11:53.989233 | orchestrator | Monday 19 May 2025 22:08:28 +0000 (0:00:00.947) 0:00:01.249 ************ 2025-05-19 22:11:53.989243 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-19 22:11:53.989253 | orchestrator | ok: 
[testbed-node-0] => (item=enable_prometheus_True) 2025-05-19 22:11:53.989262 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-19 22:11:53.989272 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-19 22:11:53.989351 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-19 22:11:53.989363 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-19 22:11:53.989372 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-19 22:11:53.989382 | orchestrator | 2025-05-19 22:11:53.989391 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-19 22:11:53.989430 | orchestrator | 2025-05-19 22:11:53.989441 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-19 22:11:53.989451 | orchestrator | Monday 19 May 2025 22:08:29 +0000 (0:00:00.749) 0:00:01.999 ************ 2025-05-19 22:11:53.989489 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:11:53.989501 | orchestrator | 2025-05-19 22:11:53.989510 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-19 22:11:53.989592 | orchestrator | Monday 19 May 2025 22:08:30 +0000 (0:00:01.787) 0:00:03.786 ************ 2025-05-19 22:11:53.989608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.989636 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 22:11:53.989649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.989803 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.989832 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.989846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.989858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.989869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.989931 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.989952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.989969 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.990004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.990151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990190 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 22:11:53.990208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990287 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990331 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990377 | orchestrator | 2025-05-19 22:11:53.990387 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-19 22:11:53.990398 | orchestrator | Monday 19 May 2025 22:08:34 +0000 (0:00:03.857) 0:00:07.644 ************ 2025-05-19 22:11:53.990408 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:11:53.990418 | orchestrator | 2025-05-19 22:11:53.990428 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-19 22:11:53.990438 | orchestrator | Monday 19 May 2025 22:08:36 +0000 (0:00:01.377) 0:00:09.021 ************ 2025-05-19 22:11:53.990541 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 22:11:53.990560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.990576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.990586 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.990624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 
22:11:53.990729 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.990741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.990751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.990761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990795 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990824 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990855 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 22:11:53.990884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990921 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990952 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.990968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.990993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 22:11:53.991004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:11:53.991020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:11:53.991031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-19 22:11:53.991041 | orchestrator |
2025-05-19 22:11:53.991051 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-05-19 22:11:53.991061 | orchestrator | Monday 19 May 2025 22:08:41 +0000 (0:00:05.798) 0:00:14.819 ************
2025-05-19 22:11:53.991087 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-19 22:11:53.991104 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 22:11:53.991118 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro',
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991130 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 22:11:53.991160 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.991273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991322 | orchestrator | skipping: [testbed-manager] 2025-05-19 22:11:53.991338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.991349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.991418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991469 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.991480 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:53.991490 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:53.991500 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:53.991510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991540 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:11:53.991550 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.991560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991575 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991585 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:11:53.991595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-19 22:11:53.991606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-19 22:11:53.991623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-19 22:11:53.991640 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:11:53.991650 | orchestrator |
2025-05-19 22:11:53.991660 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-05-19 22:11:53.991670 | orchestrator | Monday 19 May 2025 22:08:43 +0000 (0:00:01.495) 0:00:16.315 ************
2025-05-19 22:11:53.991680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode':
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.991710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991746 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-19 22:11:53.991824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991843 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.991862 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991873 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-19 22:11:53.991885 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991895 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 22:11:53.991905 | orchestrator | skipping: [testbed-manager] 2025-05-19 22:11:53.991920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.991930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.991978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.991988 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:53.991998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.992008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.992018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.992033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.992043 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:11:53.992053 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.992075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.992086 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:11:53.992096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.992106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.992116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-05-19 22:11:53.992126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.992136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-19 22:11:53.992146 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:53.992160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-19 22:11:53.992171 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.992193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-19 22:11:53.992204 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:11:53.992213 | orchestrator | 2025-05-19 22:11:53.992223 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-19 22:11:53.992233 | orchestrator | Monday 19 May 2025 22:08:45 +0000 (0:00:02.031) 0:00:18.346 ************ 2025-05-19 22:11:53.992244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-19 22:11:53.992254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.992264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.992274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.992289 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.992305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.992356 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.992369 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-19 22:11:53.992379 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.992389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.992400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.992410 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.992425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.992442 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.992459 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.992470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.992480 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.992490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.992500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.992516 
| orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-19 22:11:53.992533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.992549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.992560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.992570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.992580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-19 22:11:53.992590 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.992604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.992621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-19 22:11:53.992632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-19 22:11:53.992642 | orchestrator | 2025-05-19 22:11:53.992651 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-19 22:11:53.992661 | orchestrator | Monday 19 May 2025 22:08:51 +0000 (0:00:05.941) 0:00:24.288 ************ 2025-05-19 22:11:53.992671 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 22:11:53.992681 | orchestrator | 2025-05-19 22:11:53.992708 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-19 22:11:53.992724 | orchestrator | Monday 19 May 2025 22:08:52 +0000 (0:00:01.378) 0:00:25.666 ************ 2025-05-19 22:11:53.992735 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1091030, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992746 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1091030, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992756 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1091017, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992766 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1091030, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992787 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1091030, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992798 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1091017, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992813 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1091030, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.992824 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1091030, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992834 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1091030, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 
1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992844 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1091017, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992855 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090989, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6160915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992878 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1091017, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992889 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090989, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6160915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992904 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1091017, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992915 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090989, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6160915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992925 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090995, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6170917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992935 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090989, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6160915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992953 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1091017, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.992968 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
… 'isgid': False})
2025-05-19 22:11:53.992978 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2025-05-19 22:11:53.992989 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules)
2025-05-19 22:11:53.993006 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
2025-05-19 22:11:53.993016 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2025-05-19 22:11:53.993026 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rec.rules)
2025-05-19 22:11:53.993042 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rules)
2025-05-19 22:11:53.993056 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2025-05-19 22:11:53.993067 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2025-05-19 22:11:53.993077 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2025-05-19 22:11:53.993093 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2025-05-19 22:11:53.993104 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2025-05-19 22:11:53.993114 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2025-05-19 22:11:53.993132 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2025-05-19 22:11:53.993146 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2025-05-19 22:11:53.993157 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rules)
2025-05-19 22:11:53.993167 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rules)
2025-05-19 22:11:53.993183 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2025-05-19 22:11:53.993193 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2025-05-19 22:11:53.993209 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2025-05-19 22:11:53.993219 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules)
2025-05-19 22:11:53.993233 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2025-05-19 22:11:53.993244 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-19 22:11:53.993254 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2025-05-19 22:11:53.993583 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2025-05-19 22:11:53.993603 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2025-05-19 22:11:53.993622 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2025-05-19 22:11:53.993632 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2025-05-19 22:11:53.993649 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2025-05-19 22:11:53.993659 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2025-05-19 22:11:53.993669 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2025-05-19 22:11:53.993687 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2025-05-19 22:11:53.993716 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2025-05-19 22:11:53.993733 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2025-05-19 22:11:53.993743 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-19 22:11:53.993753 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2025-05-19 22:11:53.993768 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2025-05-19 22:11:53.993779 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-19 22:11:53.993796 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2025-05-19 22:11:53.993807 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-19 22:11:53.993824 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-19 22:11:53.993834 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2025-05-19 22:11:53.993845 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2025-05-19 22:11:53.993862 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-19 22:11:53.993872 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-05-19 22:11:53.993888 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-19 22:11:53.993906 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-05-19 22:11:53.993916 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-05-19 22:11:53.993926 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2025-05-19 22:11:53.993936 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2025-05-19 22:11:53.993950 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-19 22:11:53.993961 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2025-05-19 22:11:53.993978 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2025-05-19 22:11:53.993995 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-05-19 22:11:53.994005 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2025-05-19 22:11:53.994054 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-05-19 22:11:53.994095 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2025-05-19 22:11:53.994111 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-19 22:11:53.994121 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2025-05-19 22:11:53.994140 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2025-05-19 22:11:53.994158 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2025-05-19 22:11:53.994168 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-19 22:11:53.994178 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2025-05-19 22:11:53.994191 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-05-19 22:11:53.994207 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-05-19 22:11:53.994219 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-19 22:11:53.994241 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2025-05-19 22:11:53.994254 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-19 22:11:53.994266 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules …
'ctime': 1747689822.6210916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994277 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1091008, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6210916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994289 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091015, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994305 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1091008, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6210916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994317 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090986, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6150916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994340 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091045, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6290917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994351 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1091008, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6210916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994361 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1091032, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994372 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:11:53.994382 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090986, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6150916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994392 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091005, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6200917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994407 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090986, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6150916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994417 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1091019, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6240916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994438 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091015, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994449 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091015, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994459 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090986, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6150916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994469 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1091032, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994479 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:11:53.994489 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091015, 'dev': 217, 'nlink': 1, 
'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994503 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091045, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6290917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994519 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091015, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994534 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091045, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6290917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994545 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091045, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6290917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994555 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091045, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6290917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994565 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091005, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6200917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994575 | orchestrator 
| skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091005, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6200917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994589 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091005, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6200917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994609 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091005, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6200917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994624 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1091026, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6260917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994635 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1091032, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994645 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:11:53.994655 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1091032, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994665 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:11:53.994675 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1091032, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994685 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:11:53.994725 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1091032, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-19 22:11:53.994747 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:11:53.994757 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1091047, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6300917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994767 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1091021, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6250918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994784 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090997, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6180916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994795 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1091008, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6210916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994805 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090986, 'dev': 217, 'nlink': 1, 'atime': 
1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6150916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994815 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1091015, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6230917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994825 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1091045, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6290917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994846 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1091005, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6200917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994856 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1091032, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6270916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-19 22:11:53.994866 | orchestrator | 2025-05-19 22:11:53.994877 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-05-19 22:11:53.994887 | orchestrator | Monday 19 May 2025 22:09:14 +0000 (0:00:22.254) 0:00:47.921 ************ 2025-05-19 22:11:53.994902 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-19 22:11:53.994912 | orchestrator | 2025-05-19 22:11:53.994922 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-05-19 22:11:53.994932 | orchestrator | Monday 19 May 2025 22:09:16 +0000 (0:00:01.370) 0:00:49.292 ************ 2025-05-19 22:11:53.994942 | orchestrator | [WARNING]: Skipped 2025-05-19 22:11:53.994952 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 22:11:53.994962 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-05-19 22:11:53.994972 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-05-19 22:11:53.994982 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-05-19 22:11:53.994991 | orchestrator | [WARNING]: Skipped 2025-05-19 22:11:53.995001 | orchestrator | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
orchestrator | ok: [testbed-manager -> localhost]
orchestrator | ok: [testbed-node-0 -> localhost]
orchestrator | ok: [testbed-node-1 -> localhost]
orchestrator | ok: [testbed-node-2 -> localhost]
orchestrator | ok: [testbed-node-4 -> localhost]
orchestrator | ok: [testbed-node-3 -> localhost]
orchestrator | ok: [testbed-node-5 -> localhost]
orchestrator |
orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
orchestrator | Monday 19 May 2025 22:09:18 +0000 (0:00:02.054) 0:00:51.346 ************
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
orchestrator | skipping: [testbed-node-5]
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
orchestrator |
orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
orchestrator | Monday 19 May 2025 22:09:37 +0000 (0:00:18.910) 0:01:10.257 ************
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
orchestrator | skipping: [testbed-node-5]
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
orchestrator |
orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
orchestrator | Monday 19 May 2025 22:09:40 +0000 (0:00:03.314) 0:01:13.571 ************
orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
orchestrator | Monday 19 May 2025 22:09:42 +0000 (0:00:02.169) 0:01:15.741 ************
orchestrator | ok: [testbed-manager -> localhost]
orchestrator |
orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
orchestrator | Monday 19 May 2025 22:09:43 +0000 (0:00:00.569) 0:01:16.310 ************
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
orchestrator | Monday 19 May 2025 22:09:44 +0000 (0:00:00.775) 0:01:17.086 ************
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-5]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
orchestrator | Monday 19 May 2025 22:09:47 +0000 (0:00:03.008) 0:01:20.095 ************
orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
orchestrator | Monday 19 May 2025 22:09:50 +0000 (0:00:03.172) 0:01:23.270 ************
orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-3]
orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
orchestrator | Monday 19 May 2025 22:09:53 +0000 (0:00:02.780) 0:01:26.051 ************
orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
orchestrator | ok: [testbed-manager -> localhost]
orchestrator |
orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
orchestrator | Monday 19 May 2025 22:09:54 +0000 (0:00:01.615) 0:01:27.666 ************
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
orchestrator | Monday 19 May 2025 22:09:55 +0000 (0:00:00.907) 0:01:28.573 ************
orchestrator | skipping: [testbed-manager]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [prometheus : Check prometheus containers] ********************************
orchestrator | Monday 19 May 2025 22:09:56 +0000 (0:00:01.198) 0:01:29.771 ************
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
orchestrator |
orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
orchestrator | Monday 19 May 2025 22:10:00 +0000 (0:00:03.831) 0:01:33.603 ************
orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
orchestrator | skipping: [testbed-manager]
orchestrator |
orchestrator | TASK [prometheus : Flush handlers] *********************************************
orchestrator | Monday 19 May 2025 22:10:02 +0000 (0:00:01.628) 0:01:35.232 ************
orchestrator |
orchestrator | TASK [prometheus : Flush handlers] *********************************************
orchestrator | Monday 19 May 2025 22:10:02 +0000 (0:00:00.051) 0:01:35.284 ************
orchestrator |
orchestrator | TASK [prometheus : Flush handlers] *********************************************
orchestrator | Monday 19 May 2025 22:10:02 +0000 (0:00:00.050) 0:01:35.334 ************
orchestrator |
orchestrator | TASK [prometheus : Flush handlers] *********************************************
orchestrator | Monday 19 May 2025 22:10:02 +0000 (0:00:00.048) 0:01:35.382 ************
orchestrator |
orchestrator | TASK [prometheus : Flush handlers] *********************************************
orchestrator | Monday 19 May 2025 22:10:02 +0000 (0:00:00.171) 0:01:35.554 ************
orchestrator |
orchestrator | TASK [prometheus : Flush handlers] *********************************************
orchestrator | Monday 19 May 2025 22:10:02 +0000 (0:00:00.049) 0:01:35.603 ************
orchestrator |
orchestrator | TASK [prometheus : Flush handlers] *********************************************
orchestrator | Monday 19 May 2025 22:10:02 +0000 (0:00:00.051) 0:01:35.654 ************
orchestrator |
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
orchestrator | Monday 19 May 2025 22:10:02 +0000 (0:00:00.062) 0:01:35.717 ************
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
orchestrator | Monday 19 May 2025 22:10:25 +0000 (0:00:23.053) 0:01:58.771 ************
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-4]
orchestrator |
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
orchestrator | Monday 19 May 2025 22:10:40 +0000 (0:00:14.510) 0:02:13.281 ************
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
orchestrator | Monday 19 May 2025 22:10:51 +0000 (0:00:11.157) 0:02:24.438 ************
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
orchestrator | Monday 19 May 2025 22:11:02 +0000 (0:00:10.546) 0:02:34.985 ************
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-manager]
orchestrator | changed: [testbed-node-3]
orchestrator |
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
orchestrator | Monday 19 May 2025 22:11:20 +0000 (0:00:18.852) 0:02:53.837 ************
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
orchestrator | Monday 19 May 2025 22:11:30 +0000 (0:00:09.961) 0:03:03.799 ************
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
orchestrator | Monday 19 May 2025 22:11:36 +0000 (0:00:05.796) 0:03:09.595 ************
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
orchestrator | Monday 19 May 2025 22:11:41 +0000 (0:00:04.916) 0:03:14.511 ************
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-4]
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
orchestrator |
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:11:53.998218 | orchestrator | Monday 19 May 2025 22:11:52 +0000 (0:00:10.559) 0:03:25.071 ************ 2025-05-19 22:11:53.998228 | orchestrator | =============================================================================== 2025-05-19 22:11:53.998237 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 23.05s 2025-05-19 22:11:53.998247 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.25s 2025-05-19 22:11:53.998257 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.91s 2025-05-19 22:11:53.998266 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 18.85s 2025-05-19 22:11:53.998283 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.51s 2025-05-19 22:11:53.998293 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.16s 2025-05-19 22:11:53.998303 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.56s 2025-05-19 22:11:53.998312 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.55s 2025-05-19 22:11:53.998322 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.96s 2025-05-19 22:11:53.998331 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.94s 2025-05-19 22:11:53.998341 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.80s 2025-05-19 22:11:53.998351 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.80s 2025-05-19 22:11:53.998361 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.92s 2025-05-19 22:11:53.998370 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.86s 2025-05-19 
22:11:53.998380 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.83s 2025-05-19 22:11:53.998390 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.31s 2025-05-19 22:11:53.998400 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.17s 2025-05-19 22:11:53.998409 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.01s 2025-05-19 22:11:53.998426 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.78s 2025-05-19 22:11:53.998436 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.17s 2025-05-19 22:11:53.998446 | orchestrator | 2025-05-19 22:11:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:11:57.047099 | orchestrator | 2025-05-19 22:11:57 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED 2025-05-19 22:11:57.050205 | orchestrator | 2025-05-19 22:11:57 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:11:57.050269 | orchestrator | 2025-05-19 22:11:57 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:11:57.050277 | orchestrator | 2025-05-19 22:11:57 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED 2025-05-19 22:11:57.052951 | orchestrator | 2025-05-19 22:11:57 | INFO  | Task 2a9d937f-4de5-4202-95ea-cc480ea115da is in state STARTED 2025-05-19 22:11:57.052978 | orchestrator | 2025-05-19 22:11:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:12:00.105437 | orchestrator | 2025-05-19 22:12:00 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED 2025-05-19 22:12:00.108957 | orchestrator | 2025-05-19 22:12:00 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:12:00.110181 | orchestrator | 2025-05-19 22:12:00 | INFO  | Task 
a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:00.115863 | orchestrator | 2025-05-19 22:12:00 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:00.115907 | orchestrator | 2025-05-19 22:12:00 | INFO  | Task 2a9d937f-4de5-4202-95ea-cc480ea115da is in state STARTED
2025-05-19 22:12:00.115921 | orchestrator | 2025-05-19 22:12:00 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:03.163269 | orchestrator | 2025-05-19 22:12:03 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:03.164250 | orchestrator | 2025-05-19 22:12:03 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:03.165408 | orchestrator | 2025-05-19 22:12:03 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:03.167914 | orchestrator | 2025-05-19 22:12:03 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:03.172097 | orchestrator | 2025-05-19 22:12:03 | INFO  | Task 2a9d937f-4de5-4202-95ea-cc480ea115da is in state STARTED
2025-05-19 22:12:03.172733 | orchestrator | 2025-05-19 22:12:03 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:06.212500 | orchestrator | 2025-05-19 22:12:06 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:06.212880 | orchestrator | 2025-05-19 22:12:06 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:06.213293 | orchestrator | 2025-05-19 22:12:06 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:06.213989 | orchestrator | 2025-05-19 22:12:06 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:06.215432 | orchestrator | 2025-05-19 22:12:06 | INFO  | Task 2a9d937f-4de5-4202-95ea-cc480ea115da is in state STARTED
2025-05-19 22:12:06.215440 | orchestrator | 2025-05-19 22:12:06 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:09.248153 | orchestrator | 2025-05-19 22:12:09 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:09.249052 | orchestrator | 2025-05-19 22:12:09 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:09.250141 | orchestrator | 2025-05-19 22:12:09 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:09.250170 | orchestrator | 2025-05-19 22:12:09 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:09.250855 | orchestrator | 2025-05-19 22:12:09 | INFO  | Task 2a9d937f-4de5-4202-95ea-cc480ea115da is in state STARTED
2025-05-19 22:12:09.250877 | orchestrator | 2025-05-19 22:12:09 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:12.283648 | orchestrator | 2025-05-19 22:12:12 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:12.285182 | orchestrator | 2025-05-19 22:12:12 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:12.288071 | orchestrator | 2025-05-19 22:12:12 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:12.288812 | orchestrator | 2025-05-19 22:12:12 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:12.289188 | orchestrator | 2025-05-19 22:12:12 | INFO  | Task 2a9d937f-4de5-4202-95ea-cc480ea115da is in state SUCCESS
2025-05-19 22:12:12.289208 | orchestrator | 2025-05-19 22:12:12 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:15.328655 | orchestrator | 2025-05-19 22:12:15 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:15.329114 | orchestrator | 2025-05-19 22:12:15 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:15.329959 | orchestrator | 2025-05-19 22:12:15 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:15.330331 | orchestrator | 2025-05-19 22:12:15 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:15.330592 | orchestrator | 2025-05-19 22:12:15 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:18.370144 | orchestrator | 2025-05-19 22:12:18 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:18.373315 | orchestrator | 2025-05-19 22:12:18 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:18.375842 | orchestrator | 2025-05-19 22:12:18 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:18.376277 | orchestrator | 2025-05-19 22:12:18 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:18.376386 | orchestrator | 2025-05-19 22:12:18 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:21.407865 | orchestrator | 2025-05-19 22:12:21 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:21.408155 | orchestrator | 2025-05-19 22:12:21 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:21.408721 | orchestrator | 2025-05-19 22:12:21 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:21.409293 | orchestrator | 2025-05-19 22:12:21 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:21.409321 | orchestrator | 2025-05-19 22:12:21 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:24.462777 | orchestrator | 2025-05-19 22:12:24 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:24.464877 | orchestrator | 2025-05-19 22:12:24 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:24.467745 | orchestrator | 2025-05-19 22:12:24 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:24.470087 | orchestrator | 2025-05-19 22:12:24 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:24.470148 | orchestrator | 2025-05-19 22:12:24 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:27.509114 | orchestrator | 2025-05-19 22:12:27 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:27.509470 | orchestrator | 2025-05-19 22:12:27 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:27.510629 | orchestrator | 2025-05-19 22:12:27 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:27.510882 | orchestrator | 2025-05-19 22:12:27 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:27.511036 | orchestrator | 2025-05-19 22:12:27 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:30.544091 | orchestrator | 2025-05-19 22:12:30 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:30.545250 | orchestrator | 2025-05-19 22:12:30 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:30.546254 | orchestrator | 2025-05-19 22:12:30 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:30.547533 | orchestrator | 2025-05-19 22:12:30 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:30.547559 | orchestrator | 2025-05-19 22:12:30 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:33.587924 | orchestrator | 2025-05-19 22:12:33 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:33.588453 | orchestrator | 2025-05-19 22:12:33 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:33.588921 | orchestrator | 2025-05-19 22:12:33 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:33.589717 | orchestrator | 2025-05-19 22:12:33 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:33.589828 | orchestrator | 2025-05-19 22:12:33 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:36.624098 | orchestrator | 2025-05-19 22:12:36 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:36.624466 | orchestrator | 2025-05-19 22:12:36 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:36.625175 | orchestrator | 2025-05-19 22:12:36 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:36.625950 | orchestrator | 2025-05-19 22:12:36 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:36.628495 | orchestrator | 2025-05-19 22:12:36 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:39.663058 | orchestrator | 2025-05-19 22:12:39 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:39.663704 | orchestrator | 2025-05-19 22:12:39 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:39.664380 | orchestrator | 2025-05-19 22:12:39 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:39.665607 | orchestrator | 2025-05-19 22:12:39 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:39.665733 | orchestrator | 2025-05-19 22:12:39 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:42.714907 | orchestrator | 2025-05-19 22:12:42 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:42.715007 | orchestrator | 2025-05-19 22:12:42 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:42.715891 | orchestrator | 2025-05-19 22:12:42 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:42.717111 | orchestrator | 2025-05-19 22:12:42 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:42.717350 | orchestrator | 2025-05-19 22:12:42 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:45.778891 | orchestrator | 2025-05-19 22:12:45 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:45.781713 | orchestrator | 2025-05-19 22:12:45 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:45.782500 | orchestrator | 2025-05-19 22:12:45 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:45.782853 | orchestrator | 2025-05-19 22:12:45 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:45.782965 | orchestrator | 2025-05-19 22:12:45 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:48.819867 | orchestrator | 2025-05-19 22:12:48 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:48.821684 | orchestrator | 2025-05-19 22:12:48 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:48.822816 | orchestrator | 2025-05-19 22:12:48 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:48.823716 | orchestrator | 2025-05-19 22:12:48 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state STARTED
2025-05-19 22:12:48.823747 | orchestrator | 2025-05-19 22:12:48 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:12:51.874437 | orchestrator | 2025-05-19 22:12:51 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
2025-05-19 22:12:51.875349 | orchestrator | 2025-05-19 22:12:51 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:12:51.875388 | orchestrator | 2025-05-19 22:12:51 | INFO  | Task
a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:12:51.876042 | orchestrator | 2025-05-19 22:12:51 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED
2025-05-19 22:12:51.877633 | orchestrator | 2025-05-19 22:12:51 | INFO  | Task 2dddbc78-a86d-4cd0-85c0-6b5bc7cdb867 is in state SUCCESS
2025-05-19 22:12:51.878818 | orchestrator |
2025-05-19 22:12:51.878852 | orchestrator | None
2025-05-19 22:12:51.878897 | orchestrator |
2025-05-19 22:12:51.878912 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:12:51.878923 | orchestrator |
2025-05-19 22:12:51.878934 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:12:51.878946 | orchestrator | Monday 19 May 2025 22:08:47 +0000 (0:00:00.451) 0:00:00.451 ************
2025-05-19 22:12:51.878957 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:12:51.878969 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:12:51.878980 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:12:51.878990 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:12:51.879001 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:12:51.879012 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:12:51.879022 | orchestrator |
2025-05-19 22:12:51.879033 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:12:51.879044 | orchestrator | Monday 19 May 2025 22:08:48 +0000 (0:00:00.870) 0:00:01.321 ************
2025-05-19 22:12:51.879055 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-05-19 22:12:51.879066 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-05-19 22:12:51.879077 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-05-19 22:12:51.879088 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-05-19 22:12:51.879226 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-05-19 22:12:51.879240 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-05-19 22:12:51.879251 | orchestrator |
2025-05-19 22:12:51.879311 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-05-19 22:12:51.879322 | orchestrator |
2025-05-19 22:12:51.879333 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-19 22:12:51.879344 | orchestrator | Monday 19 May 2025 22:08:48 +0000 (0:00:00.780) 0:00:02.101 ************
2025-05-19 22:12:51.879355 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:12:51.879367 | orchestrator |
2025-05-19 22:12:51.879378 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-05-19 22:12:51.879389 | orchestrator | Monday 19 May 2025 22:08:50 +0000 (0:00:01.251) 0:00:03.353 ************
2025-05-19 22:12:51.879401 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-05-19 22:12:51.879412 | orchestrator |
2025-05-19 22:12:51.879424 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-05-19 22:12:51.879437 | orchestrator | Monday 19 May 2025 22:08:53 +0000 (0:00:02.841) 0:00:06.195 ************
2025-05-19 22:12:51.879450 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-05-19 22:12:51.879463 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-05-19 22:12:51.879476 | orchestrator |
2025-05-19 22:12:51.879501 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-05-19 22:12:51.879514 | orchestrator | Monday 19 May 2025 22:08:58 +0000
(0:00:05.888) 0:00:12.083 ************
2025-05-19 22:12:51.879526 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-19 22:12:51.879539 | orchestrator |
2025-05-19 22:12:51.879552 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-05-19 22:12:51.879565 | orchestrator | Monday 19 May 2025 22:09:01 +0000 (0:00:02.826) 0:00:14.910 ************
2025-05-19 22:12:51.879577 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 22:12:51.879589 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-05-19 22:12:51.879602 | orchestrator |
2025-05-19 22:12:51.879639 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-05-19 22:12:51.879651 | orchestrator | Monday 19 May 2025 22:09:05 +0000 (0:00:03.685) 0:00:18.595 ************
2025-05-19 22:12:51.879664 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-19 22:12:51.879676 | orchestrator |
2025-05-19 22:12:51.879688 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-05-19 22:12:51.879701 | orchestrator | Monday 19 May 2025 22:09:08 +0000 (0:00:02.921) 0:00:21.517 ************
2025-05-19 22:12:51.879713 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-05-19 22:12:51.879726 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-05-19 22:12:51.879738 | orchestrator |
2025-05-19 22:12:51.879749 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-05-19 22:12:51.879760 | orchestrator | Monday 19 May 2025 22:09:15 +0000 (0:00:06.784) 0:00:28.302 ************
2025-05-19 22:12:51.879790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.879814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.879826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.879843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.879856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.879867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.879886 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.879905 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.879917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.879934 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.879947 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.879959 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.879976 | orchestrator |
2025-05-19 22:12:51.879993 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-19 22:12:51.880005 | orchestrator | Monday 19 May 2025 22:09:17 +0000 (0:00:02.781) 0:00:31.083 ************
2025-05-19 22:12:51.880016 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:12:51.880027 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:12:51.880038 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:12:51.880049 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:12:51.880060 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:12:51.880071 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:12:51.880082 | orchestrator |
2025-05-19 22:12:51.880093 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-19 22:12:51.880105 | orchestrator | Monday 19 May 2025 22:09:18 +0000 (0:00:00.470) 0:00:31.553 ************
2025-05-19 22:12:51.880116 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:12:51.880127 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:12:51.880138 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:12:51.880149 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:12:51.880160 | orchestrator |
2025-05-19 22:12:51.880171 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-05-19 22:12:51.880182 | orchestrator | Monday 19 May 2025 22:09:19 +0000 (0:00:01.308) 0:00:32.862 ************
2025-05-19 22:12:51.880193 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-05-19 22:12:51.880204 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-05-19 22:12:51.880215 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-05-19 22:12:51.880226 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-05-19 22:12:51.880237 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-05-19 22:12:51.880248 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-05-19 22:12:51.880259 | orchestrator |
2025-05-19 22:12:51.880270 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-05-19 22:12:51.880281 | orchestrator | Monday 19 May 2025 22:09:21 +0000 (0:00:01.855) 0:00:34.717 ************
2025-05-19 22:12:51.880297 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value':
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 22:12:51.880310 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 22:12:51.880328 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 22:12:51.880346 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 22:12:51.880359 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 22:12:51.880375 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-19 22:12:51.880387 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 22:12:51.880405 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 22:12:51.880424 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 22:12:51.880436 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 22:12:51.880453 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 22:12:51.880465 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-19 22:12:51.880482 | orchestrator | 2025-05-19 22:12:51.880494 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-19 22:12:51.880505 | orchestrator | Monday 19 May 2025 22:09:25 +0000 (0:00:03.932) 0:00:38.649 ************ 2025-05-19 22:12:51.880516 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 22:12:51.880528 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 22:12:51.880539 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-19 22:12:51.880550 | orchestrator | 2025-05-19 22:12:51.880561 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-19 22:12:51.880573 | orchestrator | Monday 19 May 2025 22:09:27 +0000 (0:00:02.128) 0:00:40.778 ************ 2025-05-19 22:12:51.880584 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-19 22:12:51.880595 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-19 22:12:51.880629 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-19 22:12:51.880643 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-19 22:12:51.880654 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-19 22:12:51.880671 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-19 22:12:51.880682 | orchestrator | 2025-05-19 22:12:51.880693 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-19 22:12:51.880704 | orchestrator | Monday 19 May 2025 22:09:30 +0000 
(0:00:03.001) 0:00:43.779 ************ 2025-05-19 22:12:51.880715 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-19 22:12:51.880726 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-19 22:12:51.880737 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-19 22:12:51.880748 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-19 22:12:51.880759 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-19 22:12:51.880769 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-19 22:12:51.880780 | orchestrator | 2025-05-19 22:12:51.880791 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-19 22:12:51.880802 | orchestrator | Monday 19 May 2025 22:09:31 +0000 (0:00:01.109) 0:00:44.888 ************ 2025-05-19 22:12:51.880813 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:12:51.880824 | orchestrator | 2025-05-19 22:12:51.880835 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-19 22:12:51.880845 | orchestrator | Monday 19 May 2025 22:09:31 +0000 (0:00:00.181) 0:00:45.069 ************ 2025-05-19 22:12:51.880856 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:12:51.880867 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:12:51.880878 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:12:51.880889 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:12:51.880900 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:12:51.880910 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:12:51.880921 | orchestrator | 2025-05-19 22:12:51.880932 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-19 22:12:51.880949 | orchestrator | Monday 19 May 2025 22:09:32 +0000 (0:00:00.557) 0:00:45.627 ************ 2025-05-19 22:12:51.880962 | orchestrator | included: 
/ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-19 22:12:51.880974 | orchestrator | 2025-05-19 22:12:51.880985 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-19 22:12:51.880996 | orchestrator | Monday 19 May 2025 22:09:33 +0000 (0:00:00.952) 0:00:46.579 ************ 2025-05-19 22:12:51.881017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 22:12:51.881030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 22:12:51.881048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 22:12:51.881068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.881099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.881126 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.881146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.881159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.881661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.881684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.881707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.881725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.881737 | orchestrator | 2025-05-19 
22:12:51.881748 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-19 22:12:51.881759 | orchestrator | Monday 19 May 2025 22:09:36 +0000 (0:00:02.963) 0:00:49.543 ************ 2025-05-19 22:12:51.881771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 22:12:51.881789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:12:51.881802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.881820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.881837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.881849 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:12:51.881860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.881871 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:12:51.881882 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:12:51.881894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.881911 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.881929 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:12:51.881941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.881957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.881968 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:12:51.881980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.881991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882002 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:12:51.882057 | orchestrator |
2025-05-19 22:12:51.882072 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-05-19 22:12:51.882084 | orchestrator | Monday 19 May 2025 22:09:38 +0000 (0:00:02.061) 0:00:51.604 ************
2025-05-19 22:12:51.882103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.882122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882133 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:12:51.882149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.882161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882173 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:12:51.882184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.882202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882220 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:12:51.882233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882259 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:12:51.882276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882302 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:12:51.882322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882355 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:12:51.882368 | orchestrator |
2025-05-19 22:12:51.882380 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-05-19 22:12:51.882392 | orchestrator | Monday 19 May 2025 22:09:40 +0000 (0:00:01.874) 0:00:53.479 ************
2025-05-19 22:12:51.882406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.882423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.882437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.882467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882494 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882510 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882556 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882591 | orchestrator |
2025-05-19 22:12:51.882625 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-05-19 22:12:51.882638 | orchestrator | Monday 19 May 2025 22:09:43 +0000 (0:00:03.364) 0:00:56.843 ************
2025-05-19 22:12:51.882649 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-19 22:12:51.882660 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-19 22:12:51.882671 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:12:51.882682 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-19 22:12:51.882693 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-19 22:12:51.882704 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:12:51.882715 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-19 22:12:51.882725 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:12:51.882743 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-05-19 22:12:51.882754 | orchestrator |
2025-05-19 22:12:51.882765 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-05-19 22:12:51.882775 | orchestrator | Monday 19 May 2025 22:09:46 +0000 (0:00:02.513) 0:00:59.357 ************
2025-05-19 22:12:51.882787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.882817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.882845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-19 22:12:51.882880 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882948 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-19 22:12:51.882959 | orchestrator |
2025-05-19 22:12:51.882970 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-05-19 22:12:51.882981 | orchestrator | Monday 19 May 2025 22:09:55 +0000 (0:00:09.528) 0:01:08.886 ************
2025-05-19 22:12:51.882997 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:12:51.883009 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:12:51.883020 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:12:51.883031 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:12:51.883042 | orchestrator
| changed: [testbed-node-5] 2025-05-19 22:12:51.883052 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:12:51.883063 | orchestrator | 2025-05-19 22:12:51.883074 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-19 22:12:51.883085 | orchestrator | Monday 19 May 2025 22:09:58 +0000 (0:00:02.506) 0:01:11.392 ************ 2025-05-19 22:12:51.883096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 22:12:51.883108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:12:51.883119 | orchestrator | skipping: [testbed-node-0] 
2025-05-19 22:12:51.883144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 22:12:51.883156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:12:51.883168 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:12:51.883184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-19 22:12:51.883197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:12:51.883208 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:12:51.883219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 22:12:51.883237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 22:12:51.883254 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:12:51.883266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 22:12:51.883278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 22:12:51.883289 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:12:51.883306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-19 22:12:51.883319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-19 22:12:51.883330 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:12:51.883347 | orchestrator | 2025-05-19 22:12:51.883358 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-19 22:12:51.883369 | orchestrator | Monday 19 May 2025 22:09:59 +0000 (0:00:01.242) 0:01:12.635 ************ 2025-05-19 22:12:51.883380 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:12:51.883391 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:12:51.883402 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:12:51.883413 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:12:51.883423 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:12:51.883434 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:12:51.883445 | orchestrator | 2025-05-19 22:12:51.883455 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-19 22:12:51.883470 | orchestrator | Monday 19 May 2025 22:10:00 +0000 (0:00:00.732) 0:01:13.367 ************ 2025-05-19 22:12:51.883482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 22:12:51.883493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.883511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}}) 2025-05-19 22:12:51.883523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 22:12:51.883548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-19 22:12:51.883560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.883571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.883589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.883601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.883633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.883650 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.883661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-19 22:12:51.883672 | orchestrator | 2025-05-19 22:12:51.883683 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-19 22:12:51.883694 | orchestrator | Monday 19 May 2025 22:10:02 +0000 (0:00:02.493) 0:01:15.861 ************ 2025-05-19 22:12:51.883705 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:12:51.883716 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:12:51.883728 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:12:51.883738 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:12:51.883749 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:12:51.883760 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:12:51.883771 | orchestrator | 2025-05-19 22:12:51.883781 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-19 22:12:51.883792 | orchestrator | Monday 19 May 2025 22:10:03 +0000 (0:00:00.528) 0:01:16.389 ************ 2025-05-19 22:12:51.883803 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:12:51.883814 | orchestrator | 2025-05-19 22:12:51.883825 | orchestrator | TASK [cinder : Creating Cinder 
database user and setting permissions] ********** 2025-05-19 22:12:51.883836 | orchestrator | Monday 19 May 2025 22:10:04 +0000 (0:00:01.729) 0:01:18.118 ************ 2025-05-19 22:12:51.883847 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:12:51.883857 | orchestrator | 2025-05-19 22:12:51.883868 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-19 22:12:51.883879 | orchestrator | Monday 19 May 2025 22:10:06 +0000 (0:00:01.835) 0:01:19.954 ************ 2025-05-19 22:12:51.883890 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:12:51.883901 | orchestrator | 2025-05-19 22:12:51.883912 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 22:12:51.883923 | orchestrator | Monday 19 May 2025 22:10:24 +0000 (0:00:17.245) 0:01:37.200 ************ 2025-05-19 22:12:51.883934 | orchestrator | 2025-05-19 22:12:51.883950 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 22:12:51.883968 | orchestrator | Monday 19 May 2025 22:10:24 +0000 (0:00:00.064) 0:01:37.264 ************ 2025-05-19 22:12:51.883979 | orchestrator | 2025-05-19 22:12:51.883990 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 22:12:51.884001 | orchestrator | Monday 19 May 2025 22:10:24 +0000 (0:00:00.063) 0:01:37.327 ************ 2025-05-19 22:12:51.884011 | orchestrator | 2025-05-19 22:12:51.884022 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 22:12:51.884033 | orchestrator | Monday 19 May 2025 22:10:24 +0000 (0:00:00.061) 0:01:37.389 ************ 2025-05-19 22:12:51.884044 | orchestrator | 2025-05-19 22:12:51.884055 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 22:12:51.884065 | orchestrator | Monday 19 May 2025 22:10:24 +0000 (0:00:00.064) 
0:01:37.454 ************ 2025-05-19 22:12:51.884076 | orchestrator | 2025-05-19 22:12:51.884087 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-19 22:12:51.884098 | orchestrator | Monday 19 May 2025 22:10:24 +0000 (0:00:00.058) 0:01:37.512 ************ 2025-05-19 22:12:51.884108 | orchestrator | 2025-05-19 22:12:51.884119 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-19 22:12:51.884130 | orchestrator | Monday 19 May 2025 22:10:24 +0000 (0:00:00.063) 0:01:37.576 ************ 2025-05-19 22:12:51.884141 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:12:51.884152 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:12:51.884163 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:12:51.884173 | orchestrator | 2025-05-19 22:12:51.884184 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-19 22:12:51.884195 | orchestrator | Monday 19 May 2025 22:10:46 +0000 (0:00:22.317) 0:01:59.893 ************ 2025-05-19 22:12:51.884206 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:12:51.884217 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:12:51.884228 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:12:51.884238 | orchestrator | 2025-05-19 22:12:51.884249 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-19 22:12:51.884260 | orchestrator | Monday 19 May 2025 22:10:55 +0000 (0:00:08.345) 0:02:08.239 ************ 2025-05-19 22:12:51.884271 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:12:51.884282 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:12:51.884293 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:12:51.884303 | orchestrator | 2025-05-19 22:12:51.884314 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-19 22:12:51.884325 | 
orchestrator | Monday 19 May 2025 22:12:33 +0000 (0:01:38.846) 0:03:47.085 ************ 2025-05-19 22:12:51.884336 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:12:51.884347 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:12:51.884357 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:12:51.884368 | orchestrator | 2025-05-19 22:12:51.884379 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-19 22:12:51.884395 | orchestrator | Monday 19 May 2025 22:12:46 +0000 (0:00:12.908) 0:03:59.994 ************ 2025-05-19 22:12:51.884406 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:12:51.884417 | orchestrator | 2025-05-19 22:12:51.884428 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:12:51.884439 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-19 22:12:51.884451 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-19 22:12:51.884462 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-19 22:12:51.884473 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-19 22:12:51.884489 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-19 22:12:51.884500 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-19 22:12:51.884511 | orchestrator | 2025-05-19 22:12:51.884522 | orchestrator | 2025-05-19 22:12:51.884533 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:12:51.884544 | orchestrator | Monday 19 May 2025 22:12:48 +0000 (0:00:01.454) 0:04:01.448 ************ 2025-05-19 22:12:51.884555 | 
orchestrator | =============================================================================== 2025-05-19 22:12:51.884566 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 98.85s 2025-05-19 22:12:51.884577 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.32s 2025-05-19 22:12:51.884587 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.25s 2025-05-19 22:12:51.884598 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 12.91s 2025-05-19 22:12:51.884655 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.53s 2025-05-19 22:12:51.884667 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.35s 2025-05-19 22:12:51.884682 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.78s 2025-05-19 22:12:51.884693 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.89s 2025-05-19 22:12:51.884747 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.93s 2025-05-19 22:12:51.884760 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.69s 2025-05-19 22:12:51.884771 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.36s 2025-05-19 22:12:51.884782 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.00s 2025-05-19 22:12:51.884793 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.96s 2025-05-19 22:12:51.884803 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.92s 2025-05-19 22:12:51.884814 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.84s 2025-05-19 22:12:51.884825 | orchestrator | 
service-ks-register : cinder | Creating projects ------------------------ 2.83s
orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.78s
orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.51s
orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.51s
orchestrator | cinder : Check cinder containers ---------------------------------------- 2.49s
orchestrator | 2025-05-19 22:12:51 | INFO  | Wait 1 second(s) until the next check
orchestrator | 2025-05-19 22:12:54 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state STARTED
orchestrator | 2025-05-19 22:12:54 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
orchestrator | 2025-05-19 22:12:54 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
orchestrator | 2025-05-19 22:12:54 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED
orchestrator | 2025-05-19 22:12:54 | INFO  | Wait 1 second(s) until the next check
orchestrator | [identical status checks for the same four tasks repeated every ~3 seconds from 22:12:57 through 22:13:58; all four tasks remained in state STARTED]
orchestrator | 2025-05-19 22:14:01 | INFO  | Task f505cd21-ecd3-4179-90aa-527608ad9d22 is in state SUCCESS
orchestrator | 2025-05-19 22:14:01 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Monday 19 May 2025 22:11:56 +0000 (0:00:00.252)       0:00:00.252 ************
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Monday 19 May 2025 22:11:56 +0000 (0:00:00.291)       0:00:00.544 ************
orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
orchestrator |
orchestrator | PLAY [Apply role barbican] *****************************************************
orchestrator |
orchestrator | TASK [barbican : include_tasks] ************************************************
orchestrator | Monday 19 May 2025 22:11:57 +0000 (0:00:00.444)       0:00:00.989 ************
orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
orchestrator | Monday 19 May 2025 22:11:57 +0000 (0:00:00.523)       0:00:01.513 ************
orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
orchestrator |
orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
orchestrator | Monday 19 May
2025 22:12:01 +0000 (0:00:03.125)       0:00:04.639 ************
orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
orchestrator |
orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
orchestrator | Monday 19 May 2025 22:12:07 +0000 (0:00:06.255)       0:00:10.894 ************
orchestrator | ok: [testbed-node-0] => (item=service)
orchestrator |
orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
orchestrator | Monday 19 May 2025 22:12:10 +0000 (0:00:03.333)       0:00:14.228 ************
orchestrator | [WARNING]: Module did not set no_log for update_password
orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
orchestrator |
orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
orchestrator | Monday 19 May 2025 22:12:14 +0000 (0:00:04.130)       0:00:18.359 ************
orchestrator | ok: [testbed-node-0] => (item=admin)
orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
orchestrator | changed: [testbed-node-0] => (item=creator)
orchestrator | changed: [testbed-node-0] => (item=observer)
orchestrator | changed: [testbed-node-0] => (item=audit)
orchestrator |
orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
orchestrator | Monday 19 May 2025 22:12:29 +0000 (0:00:14.889)       0:00:33.249 ************
orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
orchestrator |
orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
orchestrator | Monday 19 May 2025 22:12:33 +0000 (0:00:03.832)       0:00:37.082 ************
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
orchestrator | [changed: testbed-node-0 and testbed-node-1 with the same barbican-api item; only the healthcheck IP differs (192.168.16.10 and 192.168.16.11)]
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
orchestrator | [changed: testbed-node-1 and testbed-node-0 with the identical barbican-keystone-listener item]
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
orchestrator | [changed: testbed-node-1 and testbed-node-0 with the identical barbican-worker item]
orchestrator |
orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
orchestrator | Monday 19 May 2025 22:12:36 +0000 (0:00:02.678)       0:00:39.760 ************
orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
orchestrator |
orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
orchestrator | Monday 19 May 2025 22:12:37 +0000 (0:00:01.402)       0:00:41.163 ************
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [barbican : Set barbican policy file] *************************************
orchestrator | Monday 19 May 2025 22:12:37 +0000 (0:00:00.115)       0:00:41.278 ************
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [barbican : include_tasks] ************************************************
orchestrator | Monday 19 May 2025 22:12:38 +0000 (0:00:00.371)       0:00:41.650 ************
orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
orchestrator | Monday 19 May 2025 22:12:38 +0000 (0:00:00.477)       0:00:42.128 ************
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
orchestrator | [changed: testbed-node-2 and testbed-node-0 with the same barbican-api item (healthcheck IPs 192.168.16.12 and 192.168.16.10), and all three nodes with the identical barbican-keystone-listener and barbican-worker items shown under the previous task]
orchestrator |
orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
orchestrator | Monday 19 May 2025 22:12:42 +0000 (0:00:04.370)       0:00:46.498 ************
orchestrator | [skipping: testbed-node-0 for the same barbican-api, barbican-keystone-listener, and barbican-worker items]
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:14:01.922860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.922872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.922883 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:14:01.922895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:14:01.922913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.922925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.922936 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:14:01.922947 | orchestrator | 2025-05-19 22:14:01.922959 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 
2025-05-19 22:14:01.922970 | orchestrator | Monday 19 May 2025 22:12:45 +0000 (0:00:02.235) 0:00:48.734 ************ 2025-05-19 22:14:01.922993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:14:01.923006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923029 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:14:01.923053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:14:01.923065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923088 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:14:01.923111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:14:01.923123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923153 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:14:01.923164 | orchestrator | 2025-05-19 22:14:01.923176 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-19 22:14:01.923187 | orchestrator | Monday 19 May 2025 22:12:46 +0000 (0:00:01.332) 0:00:50.066 ************ 2025-05-19 22:14:01.923198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:14:01.923216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 2025-05-19 22:14:01.923229 | orchestrator | 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:14:01 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:01.923245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:14:01.923257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923361 | orchestrator | 2025-05-19 22:14:01.923373 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-19 22:14:01.923384 | orchestrator | Monday 19 May 2025 22:12:50 +0000 (0:00:03.976) 0:00:54.043 ************ 2025-05-19 22:14:01.923395 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:14:01.923406 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:14:01.923417 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:14:01.923428 | orchestrator | 2025-05-19 22:14:01.923439 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-19 22:14:01.923457 | orchestrator | Monday 19 May 2025 22:12:53 +0000 (0:00:03.067) 0:00:57.110 ************ 2025-05-19 22:14:01.923468 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 22:14:01.923478 | orchestrator | 2025-05-19 22:14:01.923489 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-19 22:14:01.923500 | orchestrator | Monday 19 May 2025 22:12:55 +0000 (0:00:01.848) 0:00:58.958 ************ 2025-05-19 22:14:01.923533 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:14:01.923545 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:14:01.923556 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:14:01.923567 | orchestrator | 2025-05-19 22:14:01.923578 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-19 22:14:01.923589 | orchestrator | Monday 19 May 2025 
22:12:56 +0000 (0:00:01.167) 0:01:00.125 ************ 2025-05-19 22:14:01.923600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:14:01.923613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 
22:14:01.923631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:14:01.923648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.923725 | orchestrator | 2025-05-19 22:14:01.923742 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-19 22:14:01.923754 | orchestrator | Monday 19 May 2025 22:13:06 +0000 (0:00:09.933) 0:01:10.059 ************ 2025-05-19 22:14:01.923770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:14:01.923789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923812 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:14:01.923823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:14:01.923835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923877 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:14:01.923889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-19 22:14:01.923901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:14:01.923923 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:14:01.923934 | orchestrator | 2025-05-19 22:14:01.923946 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-19 22:14:01.923957 | orchestrator | Monday 19 May 2025 22:13:08 +0000 (0:00:01.605) 0:01:11.665 ************ 2025-05-19 22:14:01.923968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:14:01.923991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:14:01.924010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-19 22:14:01.924022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.924033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.924045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.924056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.924090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.924102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:14:01.924161 | orchestrator | 2025-05-19 22:14:01.924172 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-19 22:14:01.924183 | orchestrator | Monday 19 May 2025 22:13:10 +0000 (0:00:02.675) 0:01:14.340 ************ 2025-05-19 22:14:01.924194 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:14:01.924206 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:14:01.924217 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:14:01.924228 | orchestrator | 2025-05-19 22:14:01.924239 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-19 
22:14:01.924250 | orchestrator | Monday 19 May 2025 22:13:11 +0000 (0:00:00.399) 0:01:14.739 ************ 2025-05-19 22:14:01.924261 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:14:01.924272 | orchestrator | 2025-05-19 22:14:01.924283 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-19 22:14:01.924294 | orchestrator | Monday 19 May 2025 22:13:13 +0000 (0:00:01.932) 0:01:16.672 ************ 2025-05-19 22:14:01.924305 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:14:01.924316 | orchestrator | 2025-05-19 22:14:01.924327 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-05-19 22:14:01.924338 | orchestrator | Monday 19 May 2025 22:13:15 +0000 (0:00:02.567) 0:01:19.239 ************ 2025-05-19 22:14:01.924349 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:14:01.924360 | orchestrator | 2025-05-19 22:14:01.924371 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-19 22:14:01.924382 | orchestrator | Monday 19 May 2025 22:13:28 +0000 (0:00:12.418) 0:01:31.658 ************ 2025-05-19 22:14:01.924393 | orchestrator | 2025-05-19 22:14:01.924404 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-19 22:14:01.924415 | orchestrator | Monday 19 May 2025 22:13:28 +0000 (0:00:00.145) 0:01:31.804 ************ 2025-05-19 22:14:01.924425 | orchestrator | 2025-05-19 22:14:01.924436 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-05-19 22:14:01.924447 | orchestrator | Monday 19 May 2025 22:13:28 +0000 (0:00:00.155) 0:01:31.960 ************ 2025-05-19 22:14:01.924458 | orchestrator | 2025-05-19 22:14:01.924469 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-05-19 22:14:01.924480 | orchestrator | Monday 19 May 2025 22:13:28 +0000 
(0:00:00.212) 0:01:32.172 ************ 2025-05-19 22:14:01.924491 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:14:01.924502 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:14:01.924546 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:14:01.924558 | orchestrator | 2025-05-19 22:14:01.924569 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-05-19 22:14:01.924588 | orchestrator | Monday 19 May 2025 22:13:39 +0000 (0:00:11.353) 0:01:43.525 ************ 2025-05-19 22:14:01.924599 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:14:01.924610 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:14:01.924621 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:14:01.924632 | orchestrator | 2025-05-19 22:14:01.924642 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-05-19 22:14:01.924653 | orchestrator | Monday 19 May 2025 22:13:48 +0000 (0:00:08.076) 0:01:51.602 ************ 2025-05-19 22:14:01.924664 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:14:01.924675 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:14:01.924686 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:14:01.924702 | orchestrator | 2025-05-19 22:14:01.924722 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:14:01.924752 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 22:14:01.924774 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 22:14:01.924793 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 22:14:01.924812 | orchestrator | 2025-05-19 22:14:01.924832 | orchestrator | 2025-05-19 22:14:01.924850 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-19 22:14:01.924870 | orchestrator | Monday 19 May 2025 22:14:00 +0000 (0:00:12.391) 0:02:03.993 ************ 2025-05-19 22:14:01.924890 | orchestrator | =============================================================================== 2025-05-19 22:14:01.924923 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.89s 2025-05-19 22:14:01.924944 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.42s 2025-05-19 22:14:01.924964 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.39s 2025-05-19 22:14:01.924984 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.35s 2025-05-19 22:14:01.925004 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.93s 2025-05-19 22:14:01.925033 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.08s 2025-05-19 22:14:01.925053 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.26s 2025-05-19 22:14:01.925064 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.37s 2025-05-19 22:14:01.925075 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.13s 2025-05-19 22:14:01.925086 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.98s 2025-05-19 22:14:01.925096 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.83s 2025-05-19 22:14:01.925107 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.33s 2025-05-19 22:14:01.925118 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.13s 2025-05-19 22:14:01.925129 | orchestrator | barbican : Copying over 
barbican-api.ini -------------------------------- 3.07s 2025-05-19 22:14:01.925140 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.68s 2025-05-19 22:14:01.925150 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.68s 2025-05-19 22:14:01.925161 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.57s 2025-05-19 22:14:01.925213 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.24s 2025-05-19 22:14:01.925225 | orchestrator | barbican : Creating barbican database ----------------------------------- 1.93s 2025-05-19 22:14:01.925235 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.85s 2025-05-19 22:14:01.925257 | orchestrator | 2025-05-19 22:14:01 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:01.925268 | orchestrator | 2025-05-19 22:14:01 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:04.958850 | orchestrator | 2025-05-19 22:14:04 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:04.958958 | orchestrator | 2025-05-19 22:14:04 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:04.959393 | orchestrator | 2025-05-19 22:14:04 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:04.959822 | orchestrator | 2025-05-19 22:14:04 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:04.959849 | orchestrator | 2025-05-19 22:14:04 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:07.993234 | orchestrator | 2025-05-19 22:14:07 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:07.995560 | orchestrator | 2025-05-19 22:14:07 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 
22:14:07.996119 | orchestrator | 2025-05-19 22:14:07 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:07.997837 | orchestrator | 2025-05-19 22:14:07 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:07.997860 | orchestrator | 2025-05-19 22:14:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:11.026982 | orchestrator | 2025-05-19 22:14:11 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:11.027157 | orchestrator | 2025-05-19 22:14:11 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:11.028208 | orchestrator | 2025-05-19 22:14:11 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:11.028585 | orchestrator | 2025-05-19 22:14:11 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:11.028603 | orchestrator | 2025-05-19 22:14:11 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:14.088296 | orchestrator | 2025-05-19 22:14:14 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:14.090091 | orchestrator | 2025-05-19 22:14:14 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:14.091907 | orchestrator | 2025-05-19 22:14:14 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:14.095401 | orchestrator | 2025-05-19 22:14:14 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:14.099578 | orchestrator | 2025-05-19 22:14:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:17.131952 | orchestrator | 2025-05-19 22:14:17 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:17.132057 | orchestrator | 2025-05-19 22:14:17 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:17.132512 | orchestrator 
| 2025-05-19 22:14:17 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:17.133003 | orchestrator | 2025-05-19 22:14:17 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:17.133024 | orchestrator | 2025-05-19 22:14:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:20.163989 | orchestrator | 2025-05-19 22:14:20 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:20.164087 | orchestrator | 2025-05-19 22:14:20 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:20.164679 | orchestrator | 2025-05-19 22:14:20 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:20.164960 | orchestrator | 2025-05-19 22:14:20 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:20.164985 | orchestrator | 2025-05-19 22:14:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:23.201384 | orchestrator | 2025-05-19 22:14:23 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:23.201869 | orchestrator | 2025-05-19 22:14:23 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:23.203661 | orchestrator | 2025-05-19 22:14:23 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:23.203699 | orchestrator | 2025-05-19 22:14:23 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:23.203712 | orchestrator | 2025-05-19 22:14:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:26.246527 | orchestrator | 2025-05-19 22:14:26 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:26.247766 | orchestrator | 2025-05-19 22:14:26 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:26.247858 | orchestrator | 2025-05-19 22:14:26 | INFO  | 
Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:26.251363 | orchestrator | 2025-05-19 22:14:26 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:26.251416 | orchestrator | 2025-05-19 22:14:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:29.299533 | orchestrator | 2025-05-19 22:14:29 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:29.303761 | orchestrator | 2025-05-19 22:14:29 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:29.304393 | orchestrator | 2025-05-19 22:14:29 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:29.305274 | orchestrator | 2025-05-19 22:14:29 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:29.305638 | orchestrator | 2025-05-19 22:14:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:32.376363 | orchestrator | 2025-05-19 22:14:32 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:32.376562 | orchestrator | 2025-05-19 22:14:32 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:32.376582 | orchestrator | 2025-05-19 22:14:32 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:32.376595 | orchestrator | 2025-05-19 22:14:32 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:32.376768 | orchestrator | 2025-05-19 22:14:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:35.439161 | orchestrator | 2025-05-19 22:14:35 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:35.441707 | orchestrator | 2025-05-19 22:14:35 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:35.441744 | orchestrator | 2025-05-19 22:14:35 | INFO  | Task 
a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:35.442248 | orchestrator | 2025-05-19 22:14:35 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:35.442274 | orchestrator | 2025-05-19 22:14:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:38.471128 | orchestrator | 2025-05-19 22:14:38 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:38.471630 | orchestrator | 2025-05-19 22:14:38 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:38.472980 | orchestrator | 2025-05-19 22:14:38 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:38.473215 | orchestrator | 2025-05-19 22:14:38 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:38.473426 | orchestrator | 2025-05-19 22:14:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:41.517864 | orchestrator | 2025-05-19 22:14:41 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:41.519301 | orchestrator | 2025-05-19 22:14:41 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state STARTED 2025-05-19 22:14:41.520708 | orchestrator | 2025-05-19 22:14:41 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:41.522213 | orchestrator | 2025-05-19 22:14:41 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:41.522238 | orchestrator | 2025-05-19 22:14:41 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:44.561951 | orchestrator | 2025-05-19 22:14:44 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:44.562169 | orchestrator | 2025-05-19 22:14:44 | INFO  | Task e966eeb6-880d-431a-a0be-408dbcd9f3c3 is in state SUCCESS 2025-05-19 22:14:44.562678 | orchestrator | 2025-05-19 22:14:44 | INFO  | Task 
a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:44.563519 | orchestrator | 2025-05-19 22:14:44 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:44.564191 | orchestrator | 2025-05-19 22:14:44 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:47.597043 | orchestrator | 2025-05-19 22:14:47 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:47.597158 | orchestrator | 2025-05-19 22:14:47 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:47.599311 | orchestrator | 2025-05-19 22:14:47 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:47.599646 | orchestrator | 2025-05-19 22:14:47 | INFO  | Task 5a514f82-bf10-41c1-873a-df102c9ea33b is in state STARTED 2025-05-19 22:14:47.600392 | orchestrator | 2025-05-19 22:14:47 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:50.660214 | orchestrator | 2025-05-19 22:14:50 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:50.663061 | orchestrator | 2025-05-19 22:14:50 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:50.665382 | orchestrator | 2025-05-19 22:14:50 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:14:50.667842 | orchestrator | 2025-05-19 22:14:50 | INFO  | Task 5a514f82-bf10-41c1-873a-df102c9ea33b is in state STARTED 2025-05-19 22:14:50.667888 | orchestrator | 2025-05-19 22:14:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:14:53.716641 | orchestrator | 2025-05-19 22:14:53 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:14:53.720781 | orchestrator | 2025-05-19 22:14:53 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED 2025-05-19 22:14:53.721505 | orchestrator | 2025-05-19 22:14:53 | INFO  | Task 
95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED
2025-05-19 22:14:53.723530 | orchestrator | 2025-05-19 22:14:53 | INFO  | Task 5a514f82-bf10-41c1-873a-df102c9ea33b is in state STARTED
2025-05-19 22:14:53.724384 | orchestrator | 2025-05-19 22:14:53 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:15:54.717230 | orchestrator | 2025-05-19 22:15:54 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:15:54.720869 | orchestrator | 2025-05-19 22:15:54 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state STARTED
2025-05-19 22:15:54.723412 | orchestrator | 2025-05-19 22:15:54 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED
2025-05-19 22:15:54.725046 | orchestrator | 2025-05-19 22:15:54 | INFO  | Task 5a514f82-bf10-41c1-873a-df102c9ea33b is in state STARTED
2025-05-19 22:15:54.725078 | orchestrator | 2025-05-19 22:15:54 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:15:57.795096 | orchestrator | 2025-05-19 22:15:57 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:15:57.797686 | orchestrator | 2025-05-19 22:15:57 | INFO  | Task a9b3a52f-ccb3-4625-be23-09ba1cdf3c40 is in state SUCCESS
2025-05-19 22:15:57.798564 | orchestrator |
2025-05-19 22:15:57.798610 | orchestrator | 2025-05-19
22:15:57.798631 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-05-19 22:15:57.798722 | orchestrator |
2025-05-19 22:15:57.798748 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-05-19 22:15:57.798769 | orchestrator | Monday 19 May 2025 22:14:06 +0000 (0:00:00.161) 0:00:00.161 ************
2025-05-19 22:15:57.798790 | orchestrator | changed: [localhost]
2025-05-19 22:15:57.798872 | orchestrator |
2025-05-19 22:15:57.798894 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-05-19 22:15:57.798913 | orchestrator | Monday 19 May 2025 22:14:08 +0000 (0:00:01.678) 0:00:01.839 ************
2025-05-19 22:15:57.798933 | orchestrator | changed: [localhost]
2025-05-19 22:15:57.798953 | orchestrator |
2025-05-19 22:15:57.798974 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-19 22:15:57.798992 | orchestrator | Monday 19 May 2025 22:14:38 +0000 (0:00:30.285) 0:00:32.125 ************
2025-05-19 22:15:57.799011 | orchestrator | changed: [localhost]
2025-05-19 22:15:57.799032 | orchestrator |
2025-05-19 22:15:57.799054 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:15:57.799074 | orchestrator |
2025-05-19 22:15:57.799094 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:15:57.799148 | orchestrator | Monday 19 May 2025 22:14:42 +0000 (0:00:03.780) 0:00:35.905 ************
2025-05-19 22:15:57.799168 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:15:57.799187 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:15:57.799207 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:15:57.799229 | orchestrator |
2025-05-19 22:15:57.799251 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:15:57.799271 | orchestrator | Monday 19 May 2025 22:14:42 +0000 (0:00:00.271) 0:00:36.177 ************
2025-05-19 22:15:57.799334 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-19 22:15:57.799356 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-19 22:15:57.799372 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-19 22:15:57.799385 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-19 22:15:57.799397 | orchestrator |
2025-05-19 22:15:57.799409 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-19 22:15:57.799421 | orchestrator | skipping: no hosts matched
2025-05-19 22:15:57.799435 | orchestrator |
2025-05-19 22:15:57.799464 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 22:15:57.799477 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:15:57.799492 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:15:57.799507 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:15:57.799519 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:15:57.799531 | orchestrator |
2025-05-19 22:15:57.799543 | orchestrator |
2025-05-19 22:15:57.799554 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:15:57.799565 | orchestrator | Monday 19 May 2025 22:14:43 +0000 (0:00:00.662) 0:00:36.839 ************
2025-05-19 22:15:57.799575 | orchestrator | ===============================================================================
2025-05-19 22:15:57.799593 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 30.29s
2025-05-19 22:15:57.799610 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.78s
2025-05-19 22:15:57.799627 | orchestrator | Ensure the destination directory exists --------------------------------- 1.68s
2025-05-19 22:15:57.799644 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2025-05-19 22:15:57.799662 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2025-05-19 22:15:57.799679 | orchestrator |
2025-05-19 22:15:57.801089 | orchestrator |
2025-05-19 22:15:57.801191 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:15:57.801215 | orchestrator |
2025-05-19 22:15:57.801235 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:15:57.801255 | orchestrator | Monday 19 May 2025 22:11:26 +0000 (0:00:00.355) 0:00:00.355 ************
2025-05-19 22:15:57.801273 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:15:57.801318 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:15:57.801337 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:15:57.801357 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:15:57.801376 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:15:57.801396 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:15:57.801413 | orchestrator |
2025-05-19 22:15:57.801433 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:15:57.801452 | orchestrator | Monday 19 May 2025 22:11:27 +0000 (0:00:00.821) 0:00:01.177 ************
2025-05-19 22:15:57.801471 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-05-19 22:15:57.801491 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-05-19 22:15:57.801574 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-05-19 22:15:57.801587 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-05-19 22:15:57.801598 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-05-19 22:15:57.801609 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-05-19 22:15:57.801635 | orchestrator |
2025-05-19 22:15:57.801647 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-05-19 22:15:57.801658 | orchestrator |
2025-05-19 22:15:57.801671 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-19 22:15:57.801683 | orchestrator | Monday 19 May 2025 22:11:27 +0000 (0:00:00.640) 0:00:01.817 ************
2025-05-19 22:15:57.801696 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:15:57.801710 | orchestrator |
2025-05-19 22:15:57.801817 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-05-19 22:15:57.801830 | orchestrator | Monday 19 May 2025 22:11:29 +0000 (0:00:01.243) 0:00:03.060 ************
2025-05-19 22:15:57.801842 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:15:57.801854 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:15:57.801866 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:15:57.801878 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:15:57.801891 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:15:57.801902 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:15:57.801914 | orchestrator |
2025-05-19 22:15:57.801926 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-05-19 22:15:57.801938 | orchestrator | Monday 19 May 2025 22:11:30 +0000 (0:00:01.353) 0:00:04.414 ************
2025-05-19 22:15:57.801951 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:15:57.801963 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:15:57.801975 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:15:57.801987 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:15:57.801998 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:15:57.802008 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:15:57.802089 | orchestrator |
2025-05-19 22:15:57.802101 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-05-19 22:15:57.802112 | orchestrator | Monday 19 May 2025 22:11:31 +0000 (0:00:01.409) 0:00:05.824 ************
2025-05-19 22:15:57.802123 | orchestrator | ok: [testbed-node-0] => {
2025-05-19 22:15:57.802135 | orchestrator |  "changed": false,
2025-05-19 22:15:57.802150 | orchestrator |  "msg": "All assertions passed"
2025-05-19 22:15:57.802168 | orchestrator | }
2025-05-19 22:15:57.802187 | orchestrator | ok: [testbed-node-1] => {
2025-05-19 22:15:57.802206 | orchestrator |  "changed": false,
2025-05-19 22:15:57.802224 | orchestrator |  "msg": "All assertions passed"
2025-05-19 22:15:57.802243 | orchestrator | }
2025-05-19 22:15:57.802260 | orchestrator | ok: [testbed-node-2] => {
2025-05-19 22:15:57.802304 | orchestrator |  "changed": false,
2025-05-19 22:15:57.802323 | orchestrator |  "msg": "All assertions passed"
2025-05-19 22:15:57.802341 | orchestrator | }
2025-05-19 22:15:57.802359 | orchestrator | ok: [testbed-node-3] => {
2025-05-19 22:15:57.802378 | orchestrator |  "changed": false,
2025-05-19 22:15:57.802396 | orchestrator |  "msg": "All assertions passed"
2025-05-19 22:15:57.802415 | orchestrator | }
2025-05-19 22:15:57.802435 | orchestrator | ok: [testbed-node-4] => {
2025-05-19 22:15:57.802453 | orchestrator |  "changed": false,
2025-05-19 22:15:57.802484 | orchestrator |  "msg": "All assertions passed"
2025-05-19 22:15:57.802495 | orchestrator | }
2025-05-19 22:15:57.802506 | orchestrator | ok: [testbed-node-5] => {
2025-05-19 22:15:57.802517 | orchestrator |  "changed": false,
2025-05-19 22:15:57.802527 | orchestrator |  "msg": "All assertions passed"
2025-05-19 22:15:57.802538 | orchestrator | }
2025-05-19 22:15:57.802549 | orchestrator |
2025-05-19 22:15:57.802560 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-05-19 22:15:57.802571 | orchestrator | Monday 19 May 2025 22:11:33 +0000 (0:00:01.256) 0:00:07.080 ************
2025-05-19 22:15:57.802594 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:15:57.802604 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:15:57.802615 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:15:57.802626 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:15:57.802636 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:15:57.802647 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:15:57.802657 | orchestrator |
2025-05-19 22:15:57.802668 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-05-19 22:15:57.802679 | orchestrator | Monday 19 May 2025 22:11:34 +0000 (0:00:00.836) 0:00:07.916 ************
2025-05-19 22:15:57.802690 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-05-19 22:15:57.802700 | orchestrator |
2025-05-19 22:15:57.802711 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-05-19 22:15:57.802722 | orchestrator | Monday 19 May 2025 22:11:37 +0000 (0:00:03.485) 0:00:11.402 ************
2025-05-19 22:15:57.802733 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-05-19 22:15:57.802745 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-05-19 22:15:57.802756 | orchestrator |
2025-05-19 22:15:57.802795 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-05-19 22:15:57.802806 |
orchestrator | Monday 19 May 2025 22:11:43 +0000 (0:00:05.889) 0:00:17.291 ************
2025-05-19 22:15:57.802817 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-19 22:15:57.802828 | orchestrator |
2025-05-19 22:15:57.802838 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-05-19 22:15:57.802849 | orchestrator | Monday 19 May 2025 22:11:46 +0000 (0:00:03.072) 0:00:20.364 ************
2025-05-19 22:15:57.802860 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 22:15:57.802870 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-05-19 22:15:57.802882 | orchestrator |
2025-05-19 22:15:57.802892 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-05-19 22:15:57.802903 | orchestrator | Monday 19 May 2025 22:11:50 +0000 (0:00:03.642) 0:00:24.007 ************
2025-05-19 22:15:57.802914 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-19 22:15:57.802925 | orchestrator |
2025-05-19 22:15:57.802935 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-05-19 22:15:57.802946 | orchestrator | Monday 19 May 2025 22:11:53 +0000 (0:00:03.152) 0:00:27.159 ************
2025-05-19 22:15:57.802957 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-05-19 22:15:57.802967 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-05-19 22:15:57.802978 | orchestrator |
2025-05-19 22:15:57.802988 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-19 22:15:57.802999 | orchestrator | Monday 19 May 2025 22:12:00 +0000 (0:00:07.125) 0:00:34.285 ************
2025-05-19 22:15:57.803010 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:15:57.803021 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:15:57.803031 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:15:57.803042 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:15:57.803052 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:15:57.803063 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:15:57.803074 | orchestrator |
2025-05-19 22:15:57.803084 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-05-19 22:15:57.803095 | orchestrator | Monday 19 May 2025 22:12:01 +0000 (0:00:00.589) 0:00:34.875 ************
2025-05-19 22:15:57.803106 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:15:57.803116 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:15:57.803127 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:15:57.803138 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:15:57.803148 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:15:57.803166 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:15:57.803177 | orchestrator |
2025-05-19 22:15:57.803188 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-05-19 22:15:57.803198 | orchestrator | Monday 19 May 2025 22:12:04 +0000 (0:00:03.101) 0:00:37.976 ************
2025-05-19 22:15:57.803209 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:15:57.803220 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:15:57.803230 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:15:57.803241 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:15:57.803252 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:15:57.803262 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:15:57.803273 | orchestrator |
2025-05-19 22:15:57.803319 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-19 22:15:57.803331 | orchestrator | Monday 19 May 2025 22:12:06 +0000 (0:00:01.961) 0:00:39.938 ************
2025-05-19 22:15:57.803342 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:15:57.803353 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:15:57.803363 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:15:57.803374 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:15:57.803385 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:15:57.803396 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:15:57.803406 | orchestrator |
2025-05-19 22:15:57.803418 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-05-19 22:15:57.803429 | orchestrator | Monday 19 May 2025 22:12:08 +0000 (0:00:02.691) 0:00:42.630 ************
2025-05-19 22:15:57.803450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 22:15:57.803479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 22:15:57.803491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 22:15:57.803512 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:15:57.803525 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:15:57.803542 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:15:57.803553 | orchestrator |
2025-05-19 22:15:57.803564 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-05-19 22:15:57.803576 | orchestrator | Monday 19 May 2025 22:12:11 +0000 (0:00:02.855) 0:00:45.485 ************
2025-05-19 22:15:57.803587 | orchestrator | [WARNING]: Skipped
2025-05-19 22:15:57.803598 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-05-19 22:15:57.803610 | orchestrator | due to this access issue:
2025-05-19 22:15:57.803621 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-05-19 22:15:57.803632 | orchestrator | a directory
2025-05-19 22:15:57.803642 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-19 22:15:57.803653 | orchestrator |
2025-05-19 22:15:57.803664 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-19 22:15:57.803681 | orchestrator | Monday 19 May 2025 22:12:12 +0000 (0:00:00.726) 0:00:46.212 ************
2025-05-19 22:15:57.803693 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:15:57.803705 | orchestrator |
2025-05-19 22:15:57.803716 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-05-19 22:15:57.803727 | orchestrator | Monday 19 May 2025 22:12:13 +0000 (0:00:01.090) 0:00:47.303 ************
2025-05-19 22:15:57.803738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 22:15:57.803758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 22:15:57.803774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-19 22:15:57.803786 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:15:57.803805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-19 22:15:57.803825 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True,
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 22:15:57.803836 | orchestrator | 2025-05-19 22:15:57.803848 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-05-19 22:15:57.803859 | orchestrator | Monday 19 May 2025 22:12:16 +0000 (0:00:03.225) 0:00:50.528 ************ 2025-05-19 22:15:57.803870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.803882 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.803898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.803910 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.803921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.803939 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.803958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.803970 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.803982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.803993 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.804004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.804015 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.804026 | orchestrator | 2025-05-19 22:15:57.804037 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-05-19 22:15:57.804048 | orchestrator | Monday 19 May 2025 22:12:19 +0000 (0:00:02.773) 0:00:53.301 ************ 2025-05-19 22:15:57.804070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.804081 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.804101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.804119 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.804131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.804142 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.804153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.804164 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.804175 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.804186 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.804202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.804220 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.804231 | orchestrator | 2025-05-19 22:15:57.804242 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-05-19 22:15:57.804252 | orchestrator | Monday 19 May 2025 22:12:22 +0000 (0:00:02.714) 0:00:56.016 ************ 2025-05-19 22:15:57.804263 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.804274 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.804375 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.804386 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.804397 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.804408 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.804419 | orchestrator | 2025-05-19 22:15:57.804429 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-19 22:15:57.804448 | orchestrator | Monday 19 May 2025 22:12:24 +0000 (0:00:02.324) 0:00:58.340 ************ 2025-05-19 22:15:57.804459 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.804470 | orchestrator | 2025-05-19 22:15:57.804482 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-19 22:15:57.804492 | orchestrator | Monday 19 May 2025 22:12:24 +0000 (0:00:00.131) 0:00:58.472 ************ 2025-05-19 22:15:57.804503 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.804515 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.804525 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.804536 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.804547 | orchestrator | skipping: 
[testbed-node-4] 2025-05-19 22:15:57.804557 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.804568 | orchestrator | 2025-05-19 22:15:57.804578 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-19 22:15:57.804589 | orchestrator | Monday 19 May 2025 22:12:25 +0000 (0:00:00.829) 0:00:59.301 ************ 2025-05-19 22:15:57.804601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.804612 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.804623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.804633 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.804649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.804666 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.804682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.804692 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.804702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.804712 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.804722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.804732 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.804741 | orchestrator | 2025-05-19 
22:15:57.804751 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-05-19 22:15:57.804761 | orchestrator | Monday 19 May 2025 22:12:28 +0000 (0:00:02.902) 0:01:02.204 ************ 2025-05-19 22:15:57.804771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.804804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 22:15:57.804823 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.804833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 22:15:57.804844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.804854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 22:15:57.804870 | orchestrator | 2025-05-19 22:15:57.804880 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-19 22:15:57.804889 | orchestrator | Monday 19 May 2025 22:12:31 +0000 (0:00:03.402) 0:01:05.607 ************ 2025-05-19 22:15:57.804904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.804922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.804933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.804943 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 22:15:57.804964 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 22:15:57.804975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 22:15:57.804985 | orchestrator | 2025-05-19 22:15:57.804995 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-19 22:15:57.805004 | orchestrator | Monday 19 May 2025 22:12:38 +0000 (0:00:06.571) 0:01:12.178 ************ 2025-05-19 22:15:57.805022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.805032 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.805042 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.805052 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.805062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.805078 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.805095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.805113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.805149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.805170 | orchestrator | 2025-05-19 22:15:57.805185 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-05-19 22:15:57.805200 | orchestrator | Monday 19 May 2025 22:12:42 +0000 (0:00:03.805) 0:01:15.983 ************ 2025-05-19 22:15:57.805216 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.805231 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:15:57.805245 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.805259 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.805347 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:15:57.805368 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:15:57.805384 | orchestrator | 2025-05-19 22:15:57.805398 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-05-19 22:15:57.805408 | orchestrator | Monday 19 May 2025 22:12:45 +0000 (0:00:03.786) 0:01:19.769 ************ 2025-05-19 22:15:57.805428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.805438 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.805448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.805459 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.805478 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-05-19 22:15:57.805489 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.805508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.805519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.805535 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.805545 | orchestrator | 2025-05-19 22:15:57.805555 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-19 22:15:57.805565 | orchestrator | Monday 19 May 2025 22:12:50 +0000 (0:00:04.642) 0:01:24.412 ************ 2025-05-19 22:15:57.805574 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.805584 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.805594 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.805604 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.805614 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.805623 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.805632 | orchestrator | 2025-05-19 22:15:57.805642 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-19 22:15:57.805652 | orchestrator | Monday 19 May 2025 22:12:53 +0000 (0:00:03.376) 0:01:27.788 ************ 2025-05-19 22:15:57.805662 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.805671 | orchestrator | skipping: [testbed-node-0] 
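The items echoed above are kolla-ansible service definitions: each carries a container image, bind-mount volumes, an optional `haproxy` frontend block, and a Docker-style `healthcheck` whose `test` is a `CMD-SHELL` invocation of a kolla helper (`healthcheck_curl` for HTTP endpoints such as neutron-server on 9696, `healthcheck_port` for the OVN metadata agent). The sketch below is illustrative only, not kolla-ansible code; it just shows how the healthcheck command can be pulled out of a definition shaped like the ones in this log.

```python
# Illustrative sketch (not kolla-ansible code): extract the shell command
# Docker would run for the healthcheck of a service definition shaped like
# the neutron-server items echoed in the log above.
service = {
    "key": "neutron-server",
    "value": {
        "container_name": "neutron_server",
        "image": "registry.osism.tech/kolla/neutron-server:2024.2",
        "enabled": True,
        "group": "neutron-server",
        "host_in_groups": True,
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
            "timeout": "30",
        },
    },
}

def healthcheck_command(svc: dict) -> str:
    """Return the shell command encoded in a kolla-style healthcheck block."""
    test = svc["value"]["healthcheck"]["test"]
    # kolla definitions in this log always use the CMD-SHELL form
    assert test[0] == "CMD-SHELL"
    return test[1]

print(healthcheck_command(service))
```

With the definition above this prints the `healthcheck_curl` invocation against the node's API address, matching what the log shows for testbed-node-0.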
2025-05-19 22:15:57.805681 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.805690 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.805700 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.805714 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.805724 | orchestrator | 2025-05-19 22:15:57.805733 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-19 22:15:57.805743 | orchestrator | Monday 19 May 2025 22:12:57 +0000 (0:00:03.492) 0:01:31.280 ************ 2025-05-19 22:15:57.805752 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.805762 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.805772 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.805781 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.805791 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.805800 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.805810 | orchestrator | 2025-05-19 22:15:57.805819 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-19 22:15:57.805829 | orchestrator | Monday 19 May 2025 22:13:00 +0000 (0:00:03.049) 0:01:34.330 ************ 2025-05-19 22:15:57.805839 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.805848 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.805858 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.805867 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.805877 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.805886 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.805896 | orchestrator | 2025-05-19 22:15:57.805905 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-19 22:15:57.805915 | orchestrator | Monday 19 May 2025 22:13:03 +0000 (0:00:03.148) 0:01:37.478 ************ 
2025-05-19 22:15:57.805925 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.805941 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.805951 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.805960 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.805970 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.805980 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.805989 | orchestrator | 2025-05-19 22:15:57.806004 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-19 22:15:57.806064 | orchestrator | Monday 19 May 2025 22:13:05 +0000 (0:00:02.263) 0:01:39.742 ************ 2025-05-19 22:15:57.806078 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.806088 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.806097 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.806107 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.806117 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.806126 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.806136 | orchestrator | 2025-05-19 22:15:57.806145 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-19 22:15:57.806155 | orchestrator | Monday 19 May 2025 22:13:08 +0000 (0:00:02.859) 0:01:42.601 ************ 2025-05-19 22:15:57.806164 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 22:15:57.806174 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.806184 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 22:15:57.806193 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.806203 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 22:15:57.806213 | 
orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.806222 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 22:15:57.806232 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.806241 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 22:15:57.806251 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.806260 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-19 22:15:57.806270 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.806304 | orchestrator | 2025-05-19 22:15:57.806314 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-19 22:15:57.806323 | orchestrator | Monday 19 May 2025 22:13:11 +0000 (0:00:02.673) 0:01:45.274 ************ 2025-05-19 22:15:57.806333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.806343 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.806358 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.806379 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.806395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.806412 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.806429 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.806445 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.806461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.806486 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.806501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.806516 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.806531 | orchestrator | 2025-05-19 22:15:57.806557 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-19 22:15:57.806572 | orchestrator | Monday 19 May 2025 22:13:14 +0000 (0:00:03.113) 0:01:48.388 ************ 2025-05-19 22:15:57.806594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.806611 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.806637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.806654 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.806670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.806686 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.806701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.806717 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.806739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.806765 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.806782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.806798 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.806814 | orchestrator | 2025-05-19 22:15:57.806829 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-19 22:15:57.806845 | orchestrator | Monday 19 May 2025 22:13:17 +0000 (0:00:02.848) 0:01:51.236 ************ 2025-05-19 22:15:57.806861 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.806876 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.806892 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.806908 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.806925 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.806960 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.806977 | orchestrator | 2025-05-19 22:15:57.806992 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-19 22:15:57.807008 | orchestrator | Monday 19 May 2025 22:13:20 +0000 (0:00:03.104) 0:01:54.341 ************ 2025-05-19 22:15:57.807024 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.807040 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.807065 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.807081 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:15:57.807102 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:15:57.807122 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:15:57.807145 | orchestrator | 2025-05-19 22:15:57.807164 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] 
************************ 2025-05-19 22:15:57.807179 | orchestrator | Monday 19 May 2025 22:13:25 +0000 (0:00:05.107) 0:01:59.448 ************ 2025-05-19 22:15:57.807194 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.807210 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.807225 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.807241 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.807257 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.807271 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.807331 | orchestrator | 2025-05-19 22:15:57.807348 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-19 22:15:57.807363 | orchestrator | Monday 19 May 2025 22:13:30 +0000 (0:00:05.293) 0:02:04.742 ************ 2025-05-19 22:15:57.807380 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.807396 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.807412 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.807428 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.807458 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.807474 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.807490 | orchestrator | 2025-05-19 22:15:57.807506 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-19 22:15:57.807523 | orchestrator | Monday 19 May 2025 22:13:36 +0000 (0:00:05.191) 0:02:09.933 ************ 2025-05-19 22:15:57.807539 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.807555 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.807572 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.807588 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.807604 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.807620 | orchestrator | skipping: [testbed-node-5] 
2025-05-19 22:15:57.807636 | orchestrator | 2025-05-19 22:15:57.807653 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-19 22:15:57.807670 | orchestrator | Monday 19 May 2025 22:13:39 +0000 (0:00:03.884) 0:02:13.817 ************ 2025-05-19 22:15:57.807686 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.807703 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.807719 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.807736 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.807752 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.807768 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.807784 | orchestrator | 2025-05-19 22:15:57.807800 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-19 22:15:57.807817 | orchestrator | Monday 19 May 2025 22:13:44 +0000 (0:00:04.847) 0:02:18.665 ************ 2025-05-19 22:15:57.807833 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.807849 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.807865 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.807881 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.807897 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.807912 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.807928 | orchestrator | 2025-05-19 22:15:57.807945 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-19 22:15:57.807990 | orchestrator | Monday 19 May 2025 22:13:50 +0000 (0:00:05.573) 0:02:24.238 ************ 2025-05-19 22:15:57.808006 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.808022 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.808037 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.808052 | orchestrator | skipping: [testbed-node-0] 
2025-05-19 22:15:57.808068 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.808084 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.808101 | orchestrator | 2025-05-19 22:15:57.808118 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-19 22:15:57.808143 | orchestrator | Monday 19 May 2025 22:13:53 +0000 (0:00:02.630) 0:02:26.869 ************ 2025-05-19 22:15:57.808159 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.808183 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.808201 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.808216 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.808230 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.808245 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.808260 | orchestrator | 2025-05-19 22:15:57.808344 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-19 22:15:57.808371 | orchestrator | Monday 19 May 2025 22:13:55 +0000 (0:00:02.512) 0:02:29.381 ************ 2025-05-19 22:15:57.808387 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.808404 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.808421 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.808436 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.808453 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.808469 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.808485 | orchestrator | 2025-05-19 22:15:57.808501 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-19 22:15:57.808530 | orchestrator | Monday 19 May 2025 22:13:58 +0000 (0:00:02.879) 0:02:32.261 ************ 2025-05-19 22:15:57.808546 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  
2025-05-19 22:15:57.808563 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.808579 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 22:15:57.808592 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.808617 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 22:15:57.808630 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.808644 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 22:15:57.808657 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.808671 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 22:15:57.808684 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.808697 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-19 22:15:57.808710 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.808723 | orchestrator | 2025-05-19 22:15:57.808735 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-05-19 22:15:57.808747 | orchestrator | Monday 19 May 2025 22:14:02 +0000 (0:00:04.228) 0:02:36.489 ************ 2025-05-19 22:15:57.808761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.808777 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.808791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.808805 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.808826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-19 22:15:57.808849 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.808870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.808885 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.808899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.808913 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.808926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-19 22:15:57.808940 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.808954 | orchestrator | 2025-05-19 22:15:57.808967 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-05-19 22:15:57.808981 | orchestrator | Monday 19 May 2025 22:14:06 +0000 (0:00:03.527) 0:02:40.017 ************ 2025-05-19 22:15:57.808995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.809031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.809057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 22:15:57.809071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-19 22:15:57.809086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 22:15:57.809101 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-19 22:15:57.809122 | orchestrator | 2025-05-19 22:15:57.809136 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-19 22:15:57.809150 | orchestrator | Monday 19 May 2025 22:14:09 +0000 (0:00:03.764) 0:02:43.781 ************ 2025-05-19 22:15:57.809168 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:15:57.809182 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:15:57.809195 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:15:57.809209 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:15:57.809222 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:15:57.809236 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:15:57.809249 | orchestrator | 2025-05-19 22:15:57.809263 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-19 22:15:57.809299 | orchestrator | Monday 19 May 2025 22:14:11 +0000 (0:00:01.493) 0:02:45.275 ************ 2025-05-19 22:15:57.809312 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:15:57.809325 | orchestrator | 2025-05-19 22:15:57.809337 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-19 22:15:57.809358 | orchestrator | Monday 19 May 2025 22:14:13 +0000 (0:00:02.311) 0:02:47.587 ************ 2025-05-19 22:15:57.809371 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:15:57.809391 | orchestrator | 2025-05-19 22:15:57.809405 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-19 
22:15:57.809417 | orchestrator | Monday 19 May 2025 22:14:16 +0000 (0:00:02.277) 0:02:49.864 ************ 2025-05-19 22:15:57.809429 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:15:57.809448 | orchestrator | 2025-05-19 22:15:57.809464 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 22:15:57.809478 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:43.413) 0:03:33.278 ************ 2025-05-19 22:15:57.809491 | orchestrator | 2025-05-19 22:15:57.809505 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 22:15:57.809518 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:00.060) 0:03:33.338 ************ 2025-05-19 22:15:57.809532 | orchestrator | 2025-05-19 22:15:57.809545 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 22:15:57.809568 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:00.190) 0:03:33.529 ************ 2025-05-19 22:15:57.809581 | orchestrator | 2025-05-19 22:15:57.809595 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 22:15:57.809608 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:00.058) 0:03:33.587 ************ 2025-05-19 22:15:57.809621 | orchestrator | 2025-05-19 22:15:57.809635 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 22:15:57.809648 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:00.061) 0:03:33.649 ************ 2025-05-19 22:15:57.809662 | orchestrator | 2025-05-19 22:15:57.809675 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-19 22:15:57.809689 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:00.072) 0:03:33.721 ************ 2025-05-19 22:15:57.809702 | orchestrator | 2025-05-19 22:15:57.809716 | orchestrator | RUNNING HANDLER 
[neutron : Restart neutron-server container] ******************* 2025-05-19 22:15:57.809729 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:00.065) 0:03:33.786 ************ 2025-05-19 22:15:57.809742 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:15:57.809756 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:15:57.809769 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:15:57.809783 | orchestrator | 2025-05-19 22:15:57.809797 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-19 22:15:57.809810 | orchestrator | Monday 19 May 2025 22:15:32 +0000 (0:00:32.540) 0:04:06.327 ************ 2025-05-19 22:15:57.809824 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:15:57.809837 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:15:57.809861 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:15:57.809874 | orchestrator | 2025-05-19 22:15:57.809887 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:15:57.809902 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-19 22:15:57.809917 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-19 22:15:57.809931 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-19 22:15:57.809945 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-19 22:15:57.809959 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-19 22:15:57.809972 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-19 22:15:57.809986 | orchestrator | 2025-05-19 22:15:57.809999 | orchestrator | 2025-05-19 22:15:57.810013 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-05-19 22:15:57.810068 | orchestrator | Monday 19 May 2025 22:15:56 +0000 (0:00:23.775) 0:04:30.102 ************ 2025-05-19 22:15:57.810083 | orchestrator | =============================================================================== 2025-05-19 22:15:57.810098 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.41s 2025-05-19 22:15:57.810112 | orchestrator | neutron : Restart neutron-server container ----------------------------- 32.54s 2025-05-19 22:15:57.810127 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 23.78s 2025-05-19 22:15:57.810141 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.13s 2025-05-19 22:15:57.810155 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.57s 2025-05-19 22:15:57.810177 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.89s 2025-05-19 22:15:57.810191 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 5.57s 2025-05-19 22:15:57.810206 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 5.29s 2025-05-19 22:15:57.810221 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 5.19s 2025-05-19 22:15:57.810235 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.11s 2025-05-19 22:15:57.810250 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 4.85s 2025-05-19 22:15:57.810265 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.64s 2025-05-19 22:15:57.810333 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.23s 2025-05-19 22:15:57.810349 | orchestrator | neutron : Copying over 
ironic_neutron_agent.ini ------------------------- 3.88s 2025-05-19 22:15:57.810363 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.81s 2025-05-19 22:15:57.810377 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.79s 2025-05-19 22:15:57.810390 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.76s 2025-05-19 22:15:57.810404 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.64s 2025-05-19 22:15:57.810416 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.53s 2025-05-19 22:15:57.810428 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.49s 2025-05-19 22:15:57.810449 | orchestrator | 2025-05-19 22:15:57 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:15:57.810473 | orchestrator | 2025-05-19 22:15:57 | INFO  | Task 5a514f82-bf10-41c1-873a-df102c9ea33b is in state STARTED 2025-05-19 22:15:57.810488 | orchestrator | 2025-05-19 22:15:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:16:00.878750 | orchestrator | 2025-05-19 22:16:00 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:16:00.883671 | orchestrator | 2025-05-19 22:16:00 | INFO  | Task b41de287-40bb-4ac7-bd30-88b908af7ee5 is in state STARTED 2025-05-19 22:16:00.883755 | orchestrator | 2025-05-19 22:16:00 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:16:00.883770 | orchestrator | 2025-05-19 22:16:00 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED 2025-05-19 22:16:00.884002 | orchestrator | 2025-05-19 22:16:00 | INFO  | Task 5a514f82-bf10-41c1-873a-df102c9ea33b is in state SUCCESS 2025-05-19 22:16:00.885831 | orchestrator | 2025-05-19 22:16:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 
22:16:00.887531 | orchestrator | 2025-05-19 22:16:00.887593 | orchestrator | 2025-05-19 22:16:00.887611 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:16:00.887623 | orchestrator | 2025-05-19 22:16:00.887634 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:16:00.887647 | orchestrator | Monday 19 May 2025 22:14:48 +0000 (0:00:00.320) 0:00:00.320 ************ 2025-05-19 22:16:00.887658 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:16:00.887675 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:16:00.887694 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:16:00.887711 | orchestrator | 2025-05-19 22:16:00.887730 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:16:00.887748 | orchestrator | Monday 19 May 2025 22:14:48 +0000 (0:00:00.450) 0:00:00.770 ************ 2025-05-19 22:16:00.887766 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-05-19 22:16:00.887793 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-05-19 22:16:00.887815 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-05-19 22:16:00.887833 | orchestrator | 2025-05-19 22:16:00.887853 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-05-19 22:16:00.887872 | orchestrator | 2025-05-19 22:16:00.887890 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-19 22:16:00.887908 | orchestrator | Monday 19 May 2025 22:14:49 +0000 (0:00:00.573) 0:00:01.344 ************ 2025-05-19 22:16:00.887921 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:16:00.887933 | orchestrator | 2025-05-19 22:16:00.887944 | orchestrator | TASK [service-ks-register : placement | Creating 
services] ********************* 2025-05-19 22:16:00.887955 | orchestrator | Monday 19 May 2025 22:14:50 +0000 (0:00:00.625) 0:00:01.969 ************ 2025-05-19 22:16:00.887966 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-05-19 22:16:00.887978 | orchestrator | 2025-05-19 22:16:00.887990 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-05-19 22:16:00.888001 | orchestrator | Monday 19 May 2025 22:14:53 +0000 (0:00:03.473) 0:00:05.443 ************ 2025-05-19 22:16:00.888012 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-05-19 22:16:00.888023 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-05-19 22:16:00.888034 | orchestrator | 2025-05-19 22:16:00.888047 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-05-19 22:16:00.888066 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:06.352) 0:00:11.796 ************ 2025-05-19 22:16:00.888085 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 22:16:00.888103 | orchestrator | 2025-05-19 22:16:00.888144 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-05-19 22:16:00.888186 | orchestrator | Monday 19 May 2025 22:15:03 +0000 (0:00:03.262) 0:00:15.058 ************ 2025-05-19 22:16:00.888199 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 22:16:00.888212 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-05-19 22:16:00.888224 | orchestrator | 2025-05-19 22:16:00.888236 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-05-19 22:16:00.888250 | orchestrator | Monday 19 May 2025 22:15:06 +0000 (0:00:03.749) 0:00:18.807 ************ 2025-05-19 22:16:00.888298 | 
orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 22:16:00.888315 | orchestrator | 2025-05-19 22:16:00.888329 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-05-19 22:16:00.888341 | orchestrator | Monday 19 May 2025 22:15:10 +0000 (0:00:03.199) 0:00:22.007 ************ 2025-05-19 22:16:00.888354 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-05-19 22:16:00.888367 | orchestrator | 2025-05-19 22:16:00.888380 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-19 22:16:00.888392 | orchestrator | Monday 19 May 2025 22:15:13 +0000 (0:00:03.726) 0:00:25.733 ************ 2025-05-19 22:16:00.888404 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:16:00.888416 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:16:00.888428 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:16:00.888441 | orchestrator | 2025-05-19 22:16:00.888453 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-05-19 22:16:00.888464 | orchestrator | Monday 19 May 2025 22:15:14 +0000 (0:00:00.275) 0:00:26.009 ************ 2025-05-19 22:16:00.888481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.888529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.888552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.888589 | orchestrator | 2025-05-19 22:16:00.888602 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-05-19 22:16:00.888614 | orchestrator | Monday 19 May 2025 22:15:15 +0000 (0:00:01.062) 0:00:27.072 ************ 2025-05-19 22:16:00.888624 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:16:00.888635 | orchestrator | 2025-05-19 22:16:00.888653 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-05-19 22:16:00.888664 | orchestrator | Monday 19 May 2025 22:15:15 +0000 (0:00:00.141) 0:00:27.213 ************ 2025-05-19 22:16:00.888675 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:16:00.888686 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:16:00.888697 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:16:00.888707 | orchestrator | 2025-05-19 22:16:00.888718 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-05-19 22:16:00.888729 | orchestrator | Monday 19 May 2025 22:15:15 +0000 (0:00:00.491) 0:00:27.705 ************ 2025-05-19 22:16:00.888739 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:16:00.888750 | orchestrator | 2025-05-19 22:16:00.888761 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-05-19 22:16:00.888772 | orchestrator | Monday 19 May 2025 22:15:16 +0000 (0:00:00.502) 0:00:28.207 ************ 2025-05-19 22:16:00.888783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.888806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.888818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.888838 | orchestrator | 2025-05-19 22:16:00.888849 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-19 22:16:00.888860 | orchestrator | Monday 19 May 2025 22:15:17 +0000 (0:00:01.383) 0:00:29.591 ************ 2025-05-19 22:16:00.888876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:16:00.888888 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:16:00.888899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:16:00.888910 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:16:00.888928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:16:00.888941 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:16:00.888952 | orchestrator | 2025-05-19 22:16:00.888962 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-19 22:16:00.888973 | orchestrator | Monday 19 May 2025 22:15:18 
+0000 (0:00:00.685) 0:00:30.277 ************ 2025-05-19 22:16:00.888984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:16:00.889002 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:16:00.889019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 
22:16:00.889031 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:16:00.889042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:16:00.889053 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:16:00.889064 | orchestrator | 2025-05-19 22:16:00.889075 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-19 22:16:00.889086 | orchestrator | Monday 19 May 2025 22:15:19 +0000 (0:00:00.700) 0:00:30.977 ************ 2025-05-19 22:16:00.889103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.889124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.889136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.889147 | orchestrator | 2025-05-19 22:16:00.889158 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-19 22:16:00.889174 | orchestrator | Monday 19 May 2025 22:15:20 +0000 (0:00:01.257) 0:00:32.234 ************ 2025-05-19 22:16:00.889185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.889197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.889216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.889234 | orchestrator | 2025-05-19 22:16:00.889246 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-19 22:16:00.889257 | orchestrator | Monday 19 May 2025 22:15:23 +0000 (0:00:03.277) 0:00:35.512 ************ 2025-05-19 22:16:00.889393 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-19 22:16:00.889411 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-19 22:16:00.889423 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-19 22:16:00.889433 | 
orchestrator | 2025-05-19 22:16:00.889444 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-19 22:16:00.889455 | orchestrator | Monday 19 May 2025 22:15:25 +0000 (0:00:01.660) 0:00:37.172 ************ 2025-05-19 22:16:00.889466 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:16:00.889475 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:16:00.889485 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:16:00.889494 | orchestrator | 2025-05-19 22:16:00.889504 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-19 22:16:00.889513 | orchestrator | Monday 19 May 2025 22:15:26 +0000 (0:00:01.515) 0:00:38.687 ************ 2025-05-19 22:16:00.889529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:16:00.889540 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:16:00.889549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:16:00.889567 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:16:00.889585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-19 22:16:00.889596 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:16:00.889606 | orchestrator | 2025-05-19 22:16:00.889616 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-19 22:16:00.889625 | orchestrator | Monday 19 
May 2025 22:15:27 +0000 (0:00:00.566) 0:00:39.254 ************ 2025-05-19 22:16:00.889635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.889650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.889661 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-19 22:16:00.889682 | orchestrator | 2025-05-19 22:16:00.889692 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-19 22:16:00.889702 | orchestrator | Monday 19 May 2025 22:15:29 +0000 (0:00:01.681) 0:00:40.935 ************ 2025-05-19 22:16:00.889711 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:16:00.889721 | orchestrator | 2025-05-19 22:16:00.889730 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-19 22:16:00.889740 | orchestrator | Monday 19 May 2025 22:15:31 +0000 (0:00:02.213) 0:00:43.148 ************ 2025-05-19 22:16:00.889749 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:16:00.889759 | orchestrator | 2025-05-19 22:16:00.889769 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-05-19 22:16:00.889779 | orchestrator | Monday 19 May 2025 22:15:33 +0000 (0:00:02.288) 0:00:45.437 ************ 2025-05-19 22:16:00.889788 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:16:00.889798 | orchestrator | 2025-05-19 22:16:00.889807 | 
orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-19 22:16:00.889817 | orchestrator | Monday 19 May 2025 22:15:47 +0000 (0:00:13.563) 0:00:59.000 ************ 2025-05-19 22:16:00.889827 | orchestrator | 2025-05-19 22:16:00.889836 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-19 22:16:00.889846 | orchestrator | Monday 19 May 2025 22:15:47 +0000 (0:00:00.078) 0:00:59.079 ************ 2025-05-19 22:16:00.889855 | orchestrator | 2025-05-19 22:16:00.889870 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-19 22:16:00.889880 | orchestrator | Monday 19 May 2025 22:15:47 +0000 (0:00:00.063) 0:00:59.142 ************ 2025-05-19 22:16:00.889890 | orchestrator | 2025-05-19 22:16:00.889899 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-19 22:16:00.889909 | orchestrator | Monday 19 May 2025 22:15:47 +0000 (0:00:00.068) 0:00:59.211 ************ 2025-05-19 22:16:00.889919 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:16:00.889928 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:16:00.889959 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:16:00.889969 | orchestrator | 2025-05-19 22:16:00.889978 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:16:00.889989 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 22:16:00.890000 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 22:16:00.890010 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 22:16:00.890074 | orchestrator | 2025-05-19 22:16:00.890087 | orchestrator | 2025-05-19 22:16:00.890105 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-19 22:16:00.890121 | orchestrator | Monday 19 May 2025 22:15:58 +0000 (0:00:11.284) 0:01:10.495 ************ 2025-05-19 22:16:00.890131 | orchestrator | =============================================================================== 2025-05-19 22:16:00.890141 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.56s 2025-05-19 22:16:00.890150 | orchestrator | placement : Restart placement-api container ---------------------------- 11.28s 2025-05-19 22:16:00.890160 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.35s 2025-05-19 22:16:00.890169 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.75s 2025-05-19 22:16:00.890179 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.73s 2025-05-19 22:16:00.890189 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.47s 2025-05-19 22:16:00.890198 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.28s 2025-05-19 22:16:00.890208 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.26s 2025-05-19 22:16:00.890226 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.20s 2025-05-19 22:16:00.890241 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.29s 2025-05-19 22:16:00.890251 | orchestrator | placement : Creating placement databases -------------------------------- 2.21s 2025-05-19 22:16:00.890261 | orchestrator | placement : Check placement containers ---------------------------------- 1.68s 2025-05-19 22:16:00.890292 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.66s 2025-05-19 22:16:00.890303 | orchestrator | placement : Copying over 
migrate-db.rc.j2 configuration ----------------- 1.52s 2025-05-19 22:16:00.890313 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.38s 2025-05-19 22:16:00.890322 | orchestrator | placement : Copying over config.json files for services ----------------- 1.26s 2025-05-19 22:16:00.890332 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.06s 2025-05-19 22:16:00.890342 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.70s 2025-05-19 22:16:00.890351 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.69s 2025-05-19 22:16:00.890361 | orchestrator | placement : include_tasks ----------------------------------------------- 0.63s 2025-05-19 22:16:03.975551 | orchestrator | 2025-05-19 22:16:03 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:16:03.977804 | orchestrator | 2025-05-19 22:16:03 | INFO  | Task b41de287-40bb-4ac7-bd30-88b908af7ee5 is in state STARTED 2025-05-19 22:16:03.978409 | orchestrator | 2025-05-19 22:16:03 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:16:03.979924 | orchestrator | 2025-05-19 22:16:03 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED 2025-05-19 22:16:03.979947 | orchestrator | 2025-05-19 22:16:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:16:07.036362 | orchestrator | 2025-05-19 22:16:07 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:16:07.038800 | orchestrator | 2025-05-19 22:16:07 | INFO  | Task b41de287-40bb-4ac7-bd30-88b908af7ee5 is in state STARTED 2025-05-19 22:16:07.038866 | orchestrator | 2025-05-19 22:16:07 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state STARTED 2025-05-19 22:16:07.039340 | orchestrator | 2025-05-19 22:16:07 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state 
STARTED 2025-05-19 22:16:07.039369 | orchestrator | 2025-05-19 22:16:07 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:16:10.106375 | orchestrator | 2025-05-19 22:16:10 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:16:10.108652 | orchestrator | 2025-05-19 22:16:10 | INFO  | Task b41de287-40bb-4ac7-bd30-88b908af7ee5 is in state STARTED 2025-05-19 22:16:10.113099 | orchestrator | 2025-05-19 22:16:10 | INFO  | Task 95b7dc29-d224-4db6-b2ae-03a86ccf8e55 is in state SUCCESS 2025-05-19 22:16:10.115637 | orchestrator | 2025-05-19 22:16:10.115679 | orchestrator | 2025-05-19 22:16:10.115692 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:16:10.115705 | orchestrator | 2025-05-19 22:16:10.115716 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:16:10.115728 | orchestrator | Monday 19 May 2025 22:12:57 +0000 (0:00:00.426) 0:00:00.426 ************ 2025-05-19 22:16:10.115740 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:16:10.115752 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:16:10.115763 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:16:10.115774 | orchestrator | 2025-05-19 22:16:10.115786 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:16:10.115797 | orchestrator | Monday 19 May 2025 22:12:57 +0000 (0:00:00.432) 0:00:00.859 ************ 2025-05-19 22:16:10.115833 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-05-19 22:16:10.115845 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-05-19 22:16:10.115856 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-05-19 22:16:10.115867 | orchestrator | 2025-05-19 22:16:10.115878 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-05-19 22:16:10.115889 | 
orchestrator | 2025-05-19 22:16:10.115900 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-19 22:16:10.115911 | orchestrator | Monday 19 May 2025 22:12:58 +0000 (0:00:00.858) 0:00:01.717 ************ 2025-05-19 22:16:10.115922 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:16:10.115934 | orchestrator | 2025-05-19 22:16:10.115946 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-19 22:16:10.115957 | orchestrator | Monday 19 May 2025 22:12:59 +0000 (0:00:01.508) 0:00:03.225 ************ 2025-05-19 22:16:10.115968 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-05-19 22:16:10.115979 | orchestrator | 2025-05-19 22:16:10.115990 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-19 22:16:10.116001 | orchestrator | Monday 19 May 2025 22:13:03 +0000 (0:00:03.392) 0:00:06.618 ************ 2025-05-19 22:16:10.116012 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-19 22:16:10.116023 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-19 22:16:10.116034 | orchestrator | 2025-05-19 22:16:10.116060 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-19 22:16:10.116071 | orchestrator | Monday 19 May 2025 22:13:09 +0000 (0:00:06.287) 0:00:12.906 ************ 2025-05-19 22:16:10.116082 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 22:16:10.116094 | orchestrator | 2025-05-19 22:16:10.116104 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-19 22:16:10.116115 | orchestrator | Monday 19 May 2025 22:13:12 +0000 (0:00:02.869) 0:00:15.775 
************ 2025-05-19 22:16:10.116126 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 22:16:10.116137 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-19 22:16:10.116149 | orchestrator | 2025-05-19 22:16:10.116160 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-05-19 22:16:10.116171 | orchestrator | Monday 19 May 2025 22:13:16 +0000 (0:00:04.080) 0:00:19.856 ************ 2025-05-19 22:16:10.116182 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 22:16:10.116193 | orchestrator | 2025-05-19 22:16:10.116204 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-05-19 22:16:10.116214 | orchestrator | Monday 19 May 2025 22:13:20 +0000 (0:00:03.592) 0:00:23.449 ************ 2025-05-19 22:16:10.116227 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-19 22:16:10.116239 | orchestrator | 2025-05-19 22:16:10.116279 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-19 22:16:10.116293 | orchestrator | Monday 19 May 2025 22:13:24 +0000 (0:00:04.368) 0:00:27.818 ************ 2025-05-19 22:16:10.116309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:16:10.116370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:16:10.116385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:16:10.116405 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-05-19 22:16:10.116447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116507 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116646 | orchestrator | 2025-05-19 22:16:10.116657 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-19 22:16:10.116669 | orchestrator | Monday 19 May 2025 22:13:28 +0000 (0:00:03.697) 0:00:31.516 ************ 2025-05-19 22:16:10.116680 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:16:10.116692 | orchestrator | 
2025-05-19 22:16:10.116703 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-19 22:16:10.116714 | orchestrator | Monday 19 May 2025 22:13:28 +0000 (0:00:00.570) 0:00:32.086 ************ 2025-05-19 22:16:10.116725 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:16:10.116736 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:16:10.116747 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:16:10.116758 | orchestrator | 2025-05-19 22:16:10.116769 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-19 22:16:10.116787 | orchestrator | Monday 19 May 2025 22:13:30 +0000 (0:00:01.219) 0:00:33.306 ************ 2025-05-19 22:16:10.116798 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:16:10.116809 | orchestrator | 2025-05-19 22:16:10.116820 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-19 22:16:10.116831 | orchestrator | Monday 19 May 2025 22:13:32 +0000 (0:00:02.001) 0:00:35.307 ************ 2025-05-19 22:16:10.116843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:16:10.116862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:16:10.116873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:16:10.116890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116931 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.116988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117122 | orchestrator | 2025-05-19 22:16:10.117133 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-19 22:16:10.117145 | orchestrator | Monday 19 May 2025 22:13:40 +0000 (0:00:08.745) 0:00:44.053 ************ 2025-05-19 22:16:10.117156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:16:10.117168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:16:10.117186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-05-19 22:16:10.117198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117245 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:16:10.117288 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:16:10.117309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:16:10.117339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117405 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:16:10.117417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:16:10.117428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:16:10.117449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.117507 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:16:10.117518 | orchestrator |
2025-05-19 22:16:10.117529 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-05-19 22:16:10.117556 | orchestrator | Monday 19 May 2025 22:13:44 +0000 (0:00:03.562) 0:00:47.616 ************
2025-05-19 22:16:10.117568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 22:16:10.117580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:16:10.117598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:16:10.117610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:16:10.117645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117741 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:16:10.117752 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:16:10.117763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2025-05-19 22:16:10.117775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-19 22:16:10.117786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.117816 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.117835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.117846 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:16:10.117857 | orchestrator |
2025-05-19 22:16:10.117873 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-05-19 22:16:10.117885 | orchestrator | Monday 19 May 2025 22:13:47 +0000 (0:00:02.772) 0:00:50.388 ************
2025-05-19 22:16:10.117896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:16:10.117908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:16:10.117926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:16:10.117938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.117996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.118171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.118190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.118201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.118212 | orchestrator |
2025-05-19 22:16:10.118224 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-05-19 22:16:10.118235 | orchestrator | Monday 19 May 2025 22:13:55 +0000 (0:00:07.867) 0:00:58.256 ************
2025-05-19 22:16:10.118276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 22:16:10.118289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 22:16:10.118301
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-19 22:16:10.118327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118478 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118539 | orchestrator | 2025-05-19 22:16:10.118550 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-05-19 22:16:10.118561 | orchestrator | Monday 19 May 2025 22:14:15 +0000 (0:00:20.402) 0:01:18.659 ************ 2025-05-19 22:16:10.118572 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-19 22:16:10.118583 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-19 22:16:10.118594 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-05-19 22:16:10.118605 | orchestrator | 2025-05-19 22:16:10.118616 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-05-19 22:16:10.118627 | orchestrator | Monday 19 May 2025 22:14:20 +0000 (0:00:05.591) 0:01:24.251 ************ 2025-05-19 22:16:10.118638 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-19 22:16:10.118648 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-19 22:16:10.118659 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-05-19 22:16:10.118670 | orchestrator | 2025-05-19 22:16:10.118681 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-05-19 22:16:10.118696 | orchestrator | Monday 19 May 
2025 22:14:23 +0000 (0:00:02.938) 0:01:27.189 ************ 2025-05-19 22:16:10.118708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:16:10.118720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:16:10.118745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:16:10.118757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.118786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.118797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})  2025-05-19 22:16:10.118826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.118844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.118856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.118867 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.118895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.118913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-19 22:16:10.118924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:16:10.118964 | orchestrator | 2025-05-19 22:16:10.118976 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-05-19 22:16:10.118986 | orchestrator | Monday 19 May 2025 22:14:27 +0000 (0:00:03.511) 0:01:30.700 ************ 2025-05-19 22:16:10.119005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:16:10.119017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:16:10.119035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-19 22:16:10.119053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}})
2025-05-19 22:16:10.119065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 22:16:10.119093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 22:16:10.119179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119285 | orchestrator |
2025-05-19 22:16:10.119296 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-19 22:16:10.119307 | orchestrator | Monday 19 May 2025 22:14:30 +0000 (0:00:02.834) 0:01:33.535 ************
2025-05-19 22:16:10.119318 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:16:10.119329 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:16:10.119340 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:16:10.119351 | orchestrator |
2025-05-19 22:16:10.119362 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-05-19 22:16:10.119373 | orchestrator | Monday 19 May 2025 22:14:31 +0000 (0:00:00.783) 0:01:34.318 ************
2025-05-19 22:16:10.119389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 22:16:10.119408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 22:16:10.119419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119473 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:16:10.119489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 22:16:10.119508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 22:16:10.119519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119571 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:16:10.119583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 22:16:10.119605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 22:16:10.119617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119669 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:16:10.119680 | orchestrator |
2025-05-19 22:16:10.119691 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-05-19 22:16:10.119702 | orchestrator | Monday 19 May 2025 22:14:32 +0000 (0:00:01.809) 0:01:36.128 ************
2025-05-19 22:16:10.119713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 22:16:10.119736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 22:16:10.119748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-19 22:16:10.119759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 22:16:10.119776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 22:16:10.119788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-19 22:16:10.119814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-19 22:16:10.119975 | orchestrator |
2025-05-19 22:16:10.119986 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-19 22:16:10.119997 | orchestrator | Monday 19 May 2025 22:14:37 +0000 (0:00:05.031) 0:01:41.160 ************
2025-05-19 22:16:10.120008 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:16:10.120026 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:16:10.120037 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:16:10.120047 | orchestrator |
2025-05-19 22:16:10.120058 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-05-19 22:16:10.120069 | orchestrator | Monday 19 May 2025 22:14:38 +0000 (0:00:00.382) 0:01:41.543 ************
2025-05-19 22:16:10.120080 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-05-19 22:16:10.120091 | orchestrator |
2025-05-19 22:16:10.120102 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-05-19 22:16:10.120113 | orchestrator | Monday 19 May 2025 22:14:40 +0000 (0:00:02.251) 0:01:43.794 ************
2025-05-19 22:16:10.120124 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-19 22:16:10.120135 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-05-19 22:16:10.120145 | orchestrator |
2025-05-19 22:16:10.120156 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-05-19 22:16:10.120167 | orchestrator | Monday 19 May 2025 22:14:42 +0000 (0:00:02.148) 0:01:45.942 ************
2025-05-19 22:16:10.120178 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:16:10.120189 | orchestrator |
2025-05-19 22:16:10.120199 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-19 22:16:10.120210 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:16.352) 0:02:02.295 ************
2025-05-19 22:16:10.120221 | orchestrator |
2025-05-19 22:16:10.120232 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-19 22:16:10.120243 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:00.058) 0:02:02.353 ************
2025-05-19 22:16:10.120289 | orchestrator |
2025-05-19 22:16:10.120302 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-19 22:16:10.120312 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:00.057) 0:02:02.411 ************
2025-05-19 22:16:10.120323 | orchestrator |
2025-05-19 22:16:10.120334 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-05-19 22:16:10.120350 | orchestrator | Monday 19 May 2025 22:14:59 +0000 (0:00:00.061) 0:02:02.472 ************
2025-05-19 22:16:10.120362 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:16:10.120373 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:16:10.120383 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:16:10.120394 | orchestrator |
2025-05-19 22:16:10.120405 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-05-19 22:16:10.120416 | orchestrator | Monday 19 May 2025 22:15:13 +0000 (0:00:13.817) 0:02:16.290 ************
2025-05-19 22:16:10.120427 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:16:10.120438 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:16:10.120449 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:16:10.120459 | orchestrator |
2025-05-19 22:16:10.120470 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-05-19 22:16:10.120481 | orchestrator | Monday 19 May 2025 22:15:19 +0000 (0:00:06.697) 0:02:22.988 ************
2025-05-19 22:16:10.120492 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:16:10.120503 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:16:10.120514 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:16:10.120525 | orchestrator |
2025-05-19 22:16:10.120536 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-05-19 22:16:10.120547 | orchestrator | Monday 19 May 2025 22:15:32 +0000 (0:00:12.488) 0:02:35.477 ************
2025-05-19 22:16:10.120557 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:16:10.120568 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:16:10.120579 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:16:10.120590 | orchestrator |
2025-05-19 22:16:10.120600 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-05-19 22:16:10.120612 | orchestrator | Monday 19 May 2025 22:15:39 +0000 (0:00:06.952) 0:02:42.429 ************
2025-05-19 22:16:10.120622 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:16:10.120640 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:16:10.120651 | orchestrator |
changed: [testbed-node-2] 2025-05-19 22:16:10.120662 | orchestrator | 2025-05-19 22:16:10.120673 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-05-19 22:16:10.120684 | orchestrator | Monday 19 May 2025 22:15:49 +0000 (0:00:10.749) 0:02:53.179 ************ 2025-05-19 22:16:10.120694 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:16:10.120705 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:16:10.120716 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:16:10.120727 | orchestrator | 2025-05-19 22:16:10.120738 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-05-19 22:16:10.120749 | orchestrator | Monday 19 May 2025 22:16:01 +0000 (0:00:11.522) 0:03:04.702 ************ 2025-05-19 22:16:10.120760 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:16:10.120770 | orchestrator | 2025-05-19 22:16:10.120781 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:16:10.120792 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 22:16:10.120804 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 22:16:10.120815 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-19 22:16:10.120826 | orchestrator | 2025-05-19 22:16:10.120837 | orchestrator | 2025-05-19 22:16:10.120854 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:16:10.120865 | orchestrator | Monday 19 May 2025 22:16:08 +0000 (0:00:07.357) 0:03:12.059 ************ 2025-05-19 22:16:10.120876 | orchestrator | =============================================================================== 2025-05-19 22:16:10.120887 | orchestrator | designate : Copying over designate.conf 
-------------------------------- 20.40s 2025-05-19 22:16:10.120897 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.35s 2025-05-19 22:16:10.120908 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.82s 2025-05-19 22:16:10.120919 | orchestrator | designate : Restart designate-central container ------------------------ 12.49s 2025-05-19 22:16:10.120929 | orchestrator | designate : Restart designate-worker container ------------------------- 11.52s 2025-05-19 22:16:10.120940 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.75s 2025-05-19 22:16:10.120950 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.75s 2025-05-19 22:16:10.120961 | orchestrator | designate : Copying over config.json files for services ----------------- 7.87s 2025-05-19 22:16:10.120972 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.36s 2025-05-19 22:16:10.120983 | orchestrator | designate : Restart designate-producer container ------------------------ 6.95s 2025-05-19 22:16:10.120994 | orchestrator | designate : Restart designate-api container ----------------------------- 6.70s 2025-05-19 22:16:10.121004 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.29s 2025-05-19 22:16:10.121015 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.59s 2025-05-19 22:16:10.121026 | orchestrator | designate : Check designate containers ---------------------------------- 5.03s 2025-05-19 22:16:10.121036 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.37s 2025-05-19 22:16:10.121047 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.08s 2025-05-19 22:16:10.121058 | orchestrator | designate : Ensuring config directories exist 
--------------------------- 3.70s 2025-05-19 22:16:10.121068 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.59s 2025-05-19 22:16:10.121079 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS certificate --- 3.56s 2025-05-19 22:16:10.121102 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.51s 2025-05-19 22:16:10.121113 | orchestrator | 2025-05-19 22:16:10 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED 2025-05-19 22:16:10.121124 | orchestrator | 2025-05-19 22:16:10 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:16:13.167808 | orchestrator | 2025-05-19 22:16:13 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:16:13.168738 | orchestrator | 2025-05-19 22:16:13 | INFO  | Task b41de287-40bb-4ac7-bd30-88b908af7ee5 is in state STARTED 2025-05-19 22:16:13.169921 | orchestrator | 2025-05-19 22:16:13 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED 2025-05-19 22:16:13.171106 | orchestrator | 2025-05-19 22:16:13 | INFO  | Task 00d4b439-3d84-45aa-abcc-80f81f8151f1 is in state STARTED 2025-05-19 22:16:13.171406 | orchestrator | 2025-05-19 22:16:13 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:16:16.229603 | orchestrator | 2025-05-19 22:16:16 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:16:16.233125 | orchestrator | 2025-05-19 22:16:16 | INFO  | Task b41de287-40bb-4ac7-bd30-88b908af7ee5 is in state STARTED 2025-05-19 22:16:16.234582 | orchestrator | 2025-05-19 22:16:16 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED 2025-05-19 22:16:16.236336 | orchestrator | 2025-05-19 22:16:16 | INFO  | Task 00d4b439-3d84-45aa-abcc-80f81f8151f1 is in state SUCCESS 2025-05-19 22:16:16.236853 | orchestrator | 2025-05-19 22:16:16 | INFO  | Wait 1 second(s) until the next check 2025-05-19 
22:16:19.295909 | orchestrator | 2025-05-19 22:16:19 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:16:19.298510 | orchestrator | 2025-05-19 22:16:19 | INFO  | Task b41de287-40bb-4ac7-bd30-88b908af7ee5 is in state STARTED 2025-05-19 22:16:19.305499 | orchestrator | 2025-05-19 22:16:19 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:16:19.308113 | orchestrator | 2025-05-19 22:16:19 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED 2025-05-19 22:16:19.308183 | orchestrator | 2025-05-19 22:16:19 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:17:50.981411 | orchestrator | 2025-05-19 22:17:50 | INFO  | Task
eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:17:50.983828 | orchestrator | 2025-05-19 22:17:50 | INFO  | Task b41de287-40bb-4ac7-bd30-88b908af7ee5 is in state STARTED 2025-05-19 22:17:50.984679 | orchestrator | 2025-05-19 22:17:50 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:17:50.986664 | orchestrator | 2025-05-19 22:17:50 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED 2025-05-19 22:17:50.986692 | orchestrator | 2025-05-19 22:17:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:17:54.059793 | orchestrator | 2025-05-19 22:17:54 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:17:54.060958 | orchestrator | 2025-05-19 22:17:54 | INFO  | Task b41de287-40bb-4ac7-bd30-88b908af7ee5 is in state STARTED 2025-05-19 22:17:54.065090 | orchestrator | 2025-05-19 22:17:54 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:17:54.066715 | orchestrator | 2025-05-19 22:17:54 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED 2025-05-19 22:17:54.067166 | orchestrator | 2025-05-19 22:17:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:17:57.116885 | orchestrator | 2025-05-19 22:17:57 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED 2025-05-19 22:17:57.119930 | orchestrator | 2025-05-19 22:17:57 | INFO  | Task b41de287-40bb-4ac7-bd30-88b908af7ee5 is in state SUCCESS 2025-05-19 22:17:57.121742 | orchestrator | 2025-05-19 22:17:57.121774 | orchestrator | 2025-05-19 22:17:57.121786 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:17:57.121798 | orchestrator | 2025-05-19 22:17:57.121809 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:17:57.121821 | orchestrator | Monday 19 May 2025 22:16:13 +0000 (0:00:00.210) 0:00:00.210 
************ 2025-05-19 22:17:57.121832 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:17:57.121844 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:17:57.121855 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:17:57.121865 | orchestrator | 2025-05-19 22:17:57.121890 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:17:57.121902 | orchestrator | Monday 19 May 2025 22:16:13 +0000 (0:00:00.321) 0:00:00.532 ************ 2025-05-19 22:17:57.121913 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-05-19 22:17:57.121924 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-05-19 22:17:57.121934 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-05-19 22:17:57.121945 | orchestrator | 2025-05-19 22:17:57.121956 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-05-19 22:17:57.122113 | orchestrator | 2025-05-19 22:17:57.122128 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-05-19 22:17:57.122139 | orchestrator | Monday 19 May 2025 22:16:14 +0000 (0:00:00.722) 0:00:01.254 ************ 2025-05-19 22:17:57.122149 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:17:57.122160 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:17:57.122171 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:17:57.122182 | orchestrator | 2025-05-19 22:17:57.122192 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:17:57.122204 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:17:57.122216 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:17:57.122227 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 
22:17:57.122237 | orchestrator | 2025-05-19 22:17:57.122248 | orchestrator | 2025-05-19 22:17:57.122259 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:17:57.122270 | orchestrator | Monday 19 May 2025 22:16:14 +0000 (0:00:00.792) 0:00:02.046 ************ 2025-05-19 22:17:57.122281 | orchestrator | =============================================================================== 2025-05-19 22:17:57.122291 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.79s 2025-05-19 22:17:57.122302 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2025-05-19 22:17:57.122313 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-05-19 22:17:57.122324 | orchestrator | 2025-05-19 22:17:57.122334 | orchestrator | 2025-05-19 22:17:57.122345 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:17:57.122356 | orchestrator | 2025-05-19 22:17:57.122368 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:17:57.122381 | orchestrator | Monday 19 May 2025 22:16:02 +0000 (0:00:00.320) 0:00:00.321 ************ 2025-05-19 22:17:57.122427 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:17:57.122440 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:17:57.122452 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:17:57.122464 | orchestrator | 2025-05-19 22:17:57.122476 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:17:57.122489 | orchestrator | Monday 19 May 2025 22:16:02 +0000 (0:00:00.386) 0:00:00.707 ************ 2025-05-19 22:17:57.122501 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-05-19 22:17:57.122514 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-05-19 
22:17:57.122524 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-05-19 22:17:57.122535 | orchestrator |
2025-05-19 22:17:57.122545 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-19 22:17:57.122556 | orchestrator |
2025-05-19 22:17:57.122567 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-19 22:17:57.122577 | orchestrator | Monday 19 May 2025 22:16:03 +0000 (0:00:00.593) 0:00:01.300 ************
2025-05-19 22:17:57.122588 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:17:57.122600 | orchestrator |
2025-05-19 22:17:57.122610 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-19 22:17:57.122621 | orchestrator | Monday 19 May 2025 22:16:03 +0000 (0:00:00.585) 0:00:01.886 ************
2025-05-19 22:17:57.122632 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-05-19 22:17:57.122643 | orchestrator |
2025-05-19 22:17:57.122653 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-05-19 22:17:57.122664 | orchestrator | Monday 19 May 2025 22:16:07 +0000 (0:00:03.445) 0:00:05.332 ************
2025-05-19 22:17:57.122718 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-05-19 22:17:57.122752 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-05-19 22:17:57.122771 | orchestrator |
2025-05-19 22:17:57.122790 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-05-19 22:17:57.122810 | orchestrator | Monday 19 May 2025 22:16:13 +0000 (0:00:06.503) 0:00:11.835 ************
2025-05-19 22:17:57.122829 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-19 22:17:57.122847 | orchestrator |
2025-05-19 22:17:57.122866 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-05-19 22:17:57.122886 | orchestrator | Monday 19 May 2025 22:16:16 +0000 (0:00:02.996) 0:00:14.832 ************
2025-05-19 22:17:57.122924 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-19 22:17:57.122936 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-05-19 22:17:57.122947 | orchestrator |
2025-05-19 22:17:57.122958 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-05-19 22:17:57.122968 | orchestrator | Monday 19 May 2025 22:16:20 +0000 (0:00:03.624) 0:00:18.457 ************
2025-05-19 22:17:57.122979 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-19 22:17:57.122990 | orchestrator |
2025-05-19 22:17:57.123000 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-05-19 22:17:57.123019 | orchestrator | Monday 19 May 2025 22:16:23 +0000 (0:00:03.237) 0:00:21.694 ************
2025-05-19 22:17:57.123030 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-05-19 22:17:57.123041 | orchestrator |
2025-05-19 22:17:57.123075 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-05-19 22:17:57.123124 | orchestrator | Monday 19 May 2025 22:16:27 +0000 (0:00:03.980) 0:00:25.675 ************
2025-05-19 22:17:57.123136 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:17:57.123146 | orchestrator |
2025-05-19 22:17:57.123157 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-05-19 22:17:57.123168 | orchestrator | Monday 19 May 2025 22:16:30 +0000 (0:00:03.262) 0:00:28.937 ************
2025-05-19 22:17:57.123190 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:17:57.123201
| orchestrator | 2025-05-19 22:17:57.123212 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-05-19 22:17:57.123223 | orchestrator | Monday 19 May 2025 22:16:34 +0000 (0:00:03.664) 0:00:32.602 ************ 2025-05-19 22:17:57.123234 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:17:57.123245 | orchestrator | 2025-05-19 22:17:57.123256 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-19 22:17:57.123266 | orchestrator | Monday 19 May 2025 22:16:38 +0000 (0:00:03.708) 0:00:36.310 ************ 2025-05-19 22:17:57.123281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 22:17:57.123295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 22:17:57.123307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 22:17:57.123333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:17:57.123354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:17:57.123366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:17:57.123377 | orchestrator | 2025-05-19 22:17:57.123388 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-19 22:17:57.123400 | orchestrator | Monday 19 May 2025 22:16:39 +0000 (0:00:01.467) 
0:00:37.777 ************ 2025-05-19 22:17:57.123410 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:17:57.123421 | orchestrator | 2025-05-19 22:17:57.123432 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-19 22:17:57.123443 | orchestrator | Monday 19 May 2025 22:16:39 +0000 (0:00:00.114) 0:00:37.892 ************ 2025-05-19 22:17:57.123454 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:17:57.123465 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:17:57.123475 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:17:57.123486 | orchestrator | 2025-05-19 22:17:57.123497 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-19 22:17:57.123508 | orchestrator | Monday 19 May 2025 22:16:40 +0000 (0:00:00.566) 0:00:38.459 ************ 2025-05-19 22:17:57.123518 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 22:17:57.123529 | orchestrator | 2025-05-19 22:17:57.123540 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-19 22:17:57.123551 | orchestrator | Monday 19 May 2025 22:16:41 +0000 (0:00:01.021) 0:00:39.481 ************ 2025-05-19 22:17:57.123562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 22:17:57.123586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 22:17:57.123605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 22:17:57.123616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:17:57.123628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:17:57.123639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:17:57.123650 | orchestrator | 2025-05-19 22:17:57.123662 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-19 22:17:57.123679 | orchestrator | Monday 19 May 2025 22:16:43 +0000 (0:00:02.394) 0:00:41.876 ************ 2025-05-19 22:17:57.123690 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:17:57.123701 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:17:57.123712 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:17:57.123723 | orchestrator | 2025-05-19 22:17:57.123734 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-19 22:17:57.123750 | orchestrator | Monday 19 May 2025 22:16:44 +0000 (0:00:00.334) 0:00:42.210 ************ 2025-05-19 22:17:57.123762 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:17:57.123773 | orchestrator | 2025-05-19 22:17:57.123783 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-19 22:17:57.123794 | orchestrator | Monday 19 May 2025 22:16:44 +0000 (0:00:00.815) 0:00:43.026 ************ 2025-05-19 22:17:57.123810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 22:17:57.123822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 22:17:57.123834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-19 22:17:57.123845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:17:57.123877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:17:57.123889 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:17:57.123900 | orchestrator | 2025-05-19 22:17:57.123912 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-19 22:17:57.123923 | orchestrator | Monday 19 May 2025 22:16:47 +0000 (0:00:02.508) 0:00:45.535 ************ 2025-05-19 22:17:57.123934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 22:17:57.123945 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:17:57.123956 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:17:57.123968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 22:17:57.124004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:17:57.124017 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:17:57.124028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 22:17:57.124040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:17:57.124089 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:17:57.124104 | orchestrator | 2025-05-19 22:17:57.124115 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-19 22:17:57.124126 | orchestrator | Monday 19 May 2025 22:16:48 +0000 (0:00:00.679) 0:00:46.214 ************ 2025-05-19 22:17:57.124138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-19 22:17:57.124157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.124168 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:17:57.124192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.124204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.124215 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:17:57.124227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.124238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.124256 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:17:57.124267 | orchestrator |
2025-05-19 22:17:57.124279 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2025-05-19 22:17:57.124290 | orchestrator | Monday 19 May 2025 22:16:49 +0000 (0:00:01.305) 0:00:47.519 ************
2025-05-19 22:17:57.124587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.124613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.124625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.124637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.124656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value':
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.124676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.124688 | orchestrator |
2025-05-19 22:17:57.124699 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2025-05-19 22:17:57.124710 | orchestrator | Monday 19 May 2025 22:16:51 +0000 (0:00:02.333) 0:00:49.852 ************
2025-05-19 22:17:57.124726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.124738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.124755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.124784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.124818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.124839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.124859 | orchestrator |
2025-05-19 22:17:57.124907 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2025-05-19 22:17:57.124920 | orchestrator | Monday 19 May 2025 22:16:57 +0000 (0:00:05.344) 0:00:55.197 ************
2025-05-19 22:17:57.124932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.124952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.124963 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:17:57.124974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.124999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.125011 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:17:57.125022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.125033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.125106 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:17:57.125122 | orchestrator |
2025-05-19 22:17:57.125133 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2025-05-19 22:17:57.125144 | orchestrator | Monday 19 May 2025 22:16:57 +0000 (0:00:00.822) 0:00:56.020 ************
2025-05-19 22:17:57.125155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.125174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.125191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-05-19 22:17:57.125205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.125231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.125251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:17:57.125270 | orchestrator |
2025-05-19 22:17:57.125290 | orchestrator | TASK
[magnum : include_tasks] **************************************************
2025-05-19 22:17:57.125309 | orchestrator | Monday 19 May 2025 22:17:00 +0000 (0:00:00.291) 0:00:58.189 ************
2025-05-19 22:17:57.125323 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:17:57.125334 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:17:57.125344 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:17:57.125355 | orchestrator |
2025-05-19 22:17:57.125365 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-05-19 22:17:57.125376 | orchestrator | Monday 19 May 2025 22:17:00 +0000 (0:00:00.291) 0:00:58.481 ************
2025-05-19 22:17:57.125387 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:17:57.125398 | orchestrator |
2025-05-19 22:17:57.125409 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-05-19 22:17:57.125420 | orchestrator | Monday 19 May 2025 22:17:02 +0000 (0:00:02.292) 0:01:00.773 ************
2025-05-19 22:17:57.125431 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:17:57.125442 | orchestrator |
2025-05-19 22:17:57.125452 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-05-19 22:17:57.125464 | orchestrator | Monday 19 May 2025 22:17:05 +0000 (0:00:02.862) 0:01:03.635 ************
2025-05-19 22:17:57.125481 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:17:57.125491 | orchestrator |
2025-05-19 22:17:57.125501 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-19 22:17:57.125510 | orchestrator | Monday 19 May 2025 22:17:23 +0000 (0:00:18.398) 0:01:22.034 ************
2025-05-19 22:17:57.125520 | orchestrator |
2025-05-19 22:17:57.125529 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-19 22:17:57.125539 | orchestrator | Monday 19 May 2025 22:17:24 +0000 (0:00:00.066) 0:01:22.100 ************
2025-05-19 22:17:57.125548 | orchestrator |
2025-05-19 22:17:57.125566 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-05-19 22:17:57.125576 | orchestrator | Monday 19 May 2025 22:17:24 +0000 (0:00:00.060) 0:01:22.160 ************
2025-05-19 22:17:57.125585 | orchestrator |
2025-05-19 22:17:57.125595 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-05-19 22:17:57.125604 | orchestrator | Monday 19 May 2025 22:17:24 +0000 (0:00:00.064) 0:01:22.225 ************
2025-05-19 22:17:57.125620 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:17:57.125630 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:17:57.125639 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:17:57.125650 | orchestrator |
2025-05-19 22:17:57.125667 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-05-19 22:17:57.125682 | orchestrator | Monday 19 May 2025 22:17:42 +0000 (0:00:18.453) 0:01:40.678 ************
2025-05-19 22:17:57.125698 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:17:57.125714 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:17:57.125730 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:17:57.125745 | orchestrator |
2025-05-19 22:17:57.125763 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 22:17:57.125774 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-19 22:17:57.125785 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 22:17:57.125794 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-19 22:17:57.125804 | orchestrator |
2025-05-19 22:17:57.125813 | orchestrator |
2025-05-19 22:17:57.125822 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:17:57.125832 | orchestrator | Monday 19 May 2025 22:17:54 +0000 (0:00:11.677) 0:01:52.356 ************
2025-05-19 22:17:57.125841 | orchestrator | ===============================================================================
2025-05-19 22:17:57.125851 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 18.45s
2025-05-19 22:17:57.125861 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.40s
2025-05-19 22:17:57.125870 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.68s
2025-05-19 22:17:57.125879 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.50s
2025-05-19 22:17:57.125889 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.34s
2025-05-19 22:17:57.125898 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.98s
2025-05-19 22:17:57.125907 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.71s
2025-05-19 22:17:57.125917 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.66s
2025-05-19 22:17:57.125926 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.62s
2025-05-19 22:17:57.125936 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.45s
2025-05-19 22:17:57.125945 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.26s
2025-05-19 22:17:57.125954 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.24s
2025-05-19 22:17:57.125964 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.00s
2025-05-19 22:17:57.125973 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.86s
2025-05-19 22:17:57.125982 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.51s
2025-05-19 22:17:57.125992 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.39s
2025-05-19 22:17:57.126001 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.33s
2025-05-19 22:17:57.126010 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.29s
2025-05-19 22:17:57.126085 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.17s
2025-05-19 22:17:57.126100 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.47s
2025-05-19 22:17:57.126110 | orchestrator | 2025-05-19 22:17:57 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED
2025-05-19 22:17:57.126131 | orchestrator | 2025-05-19 22:17:57 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED
2025-05-19 22:17:57.126141 | orchestrator | 2025-05-19 22:17:57 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:18:00.178922 | orchestrator | 2025-05-19 22:18:00 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:18:00.179036 | orchestrator | 2025-05-19 22:18:00 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED
2025-05-19 22:18:00.179104 | orchestrator | 2025-05-19 22:18:00 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED
2025-05-19 22:18:00.179117 | orchestrator | 2025-05-19 22:18:00 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:18:03.229639 | orchestrator | 2025-05-19 22:18:03 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:18:03.230244 | orchestrator | 2025-05-19 22:18:03 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is
in state STARTED
2025-05-19 22:18:03.231036 | orchestrator | 2025-05-19 22:18:03 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED
2025-05-19 22:18:03.231088 | orchestrator | 2025-05-19 22:18:03 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:18:06.285561 | orchestrator | 2025-05-19 22:18:06 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:18:06.286941 | orchestrator | 2025-05-19 22:18:06 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED
2025-05-19 22:18:06.288532 | orchestrator | 2025-05-19 22:18:06 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED
2025-05-19 22:18:06.288567 | orchestrator | 2025-05-19 22:18:06 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:18:09.334191 | orchestrator | 2025-05-19 22:18:09 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state STARTED
2025-05-19 22:18:09.335957 | orchestrator | 2025-05-19 22:18:09 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED
2025-05-19 22:18:09.338193 | orchestrator | 2025-05-19 22:18:09 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED
2025-05-19 22:18:09.338249 | orchestrator | 2025-05-19 22:18:09 | INFO  | Wait 1 second(s) until the next check
2025-05-19 22:18:12.415609 | orchestrator |
2025-05-19 22:18:12.415724 | orchestrator |
2025-05-19 22:18:12.415740 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-19 22:18:12.415753 | orchestrator |
2025-05-19 22:18:12.415934 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-05-19 22:18:12.415949 | orchestrator | Monday 19 May 2025 22:09:09 +0000 (0:00:00.257) 0:00:00.257 ************
2025-05-19 22:18:12.416062 | orchestrator | changed: [testbed-manager]
2025-05-19 22:18:12.416079 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:18:12.416092 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:18:12.416105 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:18:12.416117 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:18:12.416129 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:18:12.416141 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:18:12.416154 | orchestrator |
2025-05-19 22:18:12.416167 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-19 22:18:12.416179 | orchestrator | Monday 19 May 2025 22:09:10 +0000 (0:00:00.773) 0:00:01.031 ************
2025-05-19 22:18:12.416191 | orchestrator | changed: [testbed-manager]
2025-05-19 22:18:12.416203 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:18:12.416215 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:18:12.416227 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:18:12.416239 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:18:12.416251 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:18:12.416316 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:18:12.416329 | orchestrator |
2025-05-19 22:18:12.416342 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-19 22:18:12.416355 | orchestrator | Monday 19 May 2025 22:09:11 +0000 (0:00:00.488) 0:00:01.520 ************
2025-05-19 22:18:12.416367 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-05-19 22:18:12.416380 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-05-19 22:18:12.416392 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-05-19 22:18:12.416404 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-05-19 22:18:12.416417 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-05-19 22:18:12.416429 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-05-19 22:18:12.416442 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-05-19 22:18:12.416452 | orchestrator |
2025-05-19 22:18:12.416463 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-05-19 22:18:12.416490 | orchestrator |
2025-05-19 22:18:12.416502 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-05-19 22:18:12.416513 | orchestrator | Monday 19 May 2025 22:09:11 +0000 (0:00:00.685) 0:00:02.206 ************
2025-05-19 22:18:12.416524 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:18:12.416535 | orchestrator |
2025-05-19 22:18:12.416546 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-05-19 22:18:12.416557 | orchestrator | Monday 19 May 2025 22:09:12 +0000 (0:00:00.577) 0:00:02.783 ************
2025-05-19 22:18:12.416568 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-05-19 22:18:12.416600 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-05-19 22:18:12.416612 | orchestrator |
2025-05-19 22:18:12.416623 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-05-19 22:18:12.416634 | orchestrator | Monday 19 May 2025 22:09:15 +0000 (0:00:03.397) 0:00:06.181 ************
2025-05-19 22:18:12.416644 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-19 22:18:12.416655 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-19 22:18:12.416666 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:18:12.416677 | orchestrator |
2025-05-19 22:18:12.416687 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-05-19 22:18:12.416807 | orchestrator | Monday 19 May 2025 22:09:19 +0000 (0:00:03.525) 0:00:09.706 ************
2025-05-19 22:18:12.416820 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:18:12.416831 | orchestrator
| 2025-05-19 22:18:12.416842 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-19 22:18:12.416853 | orchestrator | Monday 19 May 2025 22:09:20 +0000 (0:00:00.655) 0:00:10.362 ************ 2025-05-19 22:18:12.416864 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.416875 | orchestrator | 2025-05-19 22:18:12.416885 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-19 22:18:12.416912 | orchestrator | Monday 19 May 2025 22:09:21 +0000 (0:00:01.468) 0:00:11.830 ************ 2025-05-19 22:18:12.416965 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.416976 | orchestrator | 2025-05-19 22:18:12.416988 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 22:18:12.416999 | orchestrator | Monday 19 May 2025 22:09:24 +0000 (0:00:03.088) 0:00:14.919 ************ 2025-05-19 22:18:12.417010 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.417021 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.417062 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.417073 | orchestrator | 2025-05-19 22:18:12.417084 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-19 22:18:12.417095 | orchestrator | Monday 19 May 2025 22:09:25 +0000 (0:00:00.532) 0:00:15.452 ************ 2025-05-19 22:18:12.417106 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:12.417117 | orchestrator | 2025-05-19 22:18:12.417137 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-19 22:18:12.417149 | orchestrator | Monday 19 May 2025 22:09:51 +0000 (0:00:26.762) 0:00:42.215 ************ 2025-05-19 22:18:12.417160 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.417171 | orchestrator | 2025-05-19 22:18:12.417181 | orchestrator | TASK [nova-cell : Get a list of 
existing cells] ******************************** 2025-05-19 22:18:12.417192 | orchestrator | Monday 19 May 2025 22:10:03 +0000 (0:00:12.006) 0:00:54.221 ************ 2025-05-19 22:18:12.417203 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:12.417214 | orchestrator | 2025-05-19 22:18:12.417225 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-19 22:18:12.417236 | orchestrator | Monday 19 May 2025 22:10:13 +0000 (0:00:09.758) 0:01:03.980 ************ 2025-05-19 22:18:12.417268 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:12.417280 | orchestrator | 2025-05-19 22:18:12.417291 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-19 22:18:12.417302 | orchestrator | Monday 19 May 2025 22:10:14 +0000 (0:00:01.011) 0:01:04.992 ************ 2025-05-19 22:18:12.417343 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.417355 | orchestrator | 2025-05-19 22:18:12.417366 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 22:18:12.417377 | orchestrator | Monday 19 May 2025 22:10:15 +0000 (0:00:00.445) 0:01:05.438 ************ 2025-05-19 22:18:12.417389 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:18:12.417400 | orchestrator | 2025-05-19 22:18:12.417443 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-19 22:18:12.417456 | orchestrator | Monday 19 May 2025 22:10:15 +0000 (0:00:00.517) 0:01:05.956 ************ 2025-05-19 22:18:12.417466 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:12.417477 | orchestrator | 2025-05-19 22:18:12.417488 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-19 22:18:12.417499 | orchestrator | Monday 19 May 2025 22:10:31 +0000 (0:00:16.058) 
0:01:22.015 ************ 2025-05-19 22:18:12.417510 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.417521 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.417532 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.417543 | orchestrator | 2025-05-19 22:18:12.417554 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-19 22:18:12.417564 | orchestrator | 2025-05-19 22:18:12.417575 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-19 22:18:12.417628 | orchestrator | Monday 19 May 2025 22:10:32 +0000 (0:00:00.344) 0:01:22.359 ************ 2025-05-19 22:18:12.417641 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:18:12.417652 | orchestrator | 2025-05-19 22:18:12.417663 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-19 22:18:12.417674 | orchestrator | Monday 19 May 2025 22:10:32 +0000 (0:00:00.729) 0:01:23.089 ************ 2025-05-19 22:18:12.417684 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.417695 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.417706 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.417717 | orchestrator | 2025-05-19 22:18:12.417727 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-05-19 22:18:12.417738 | orchestrator | Monday 19 May 2025 22:10:34 +0000 (0:00:02.096) 0:01:25.185 ************ 2025-05-19 22:18:12.417749 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.417760 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.417771 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.417781 | orchestrator | 2025-05-19 22:18:12.417792 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-19 22:18:12.417803 
| orchestrator | Monday 19 May 2025 22:10:37 +0000 (0:00:02.177) 0:01:27.363 ************ 2025-05-19 22:18:12.417813 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.417833 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.417844 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.417855 | orchestrator | 2025-05-19 22:18:12.417866 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-19 22:18:12.417877 | orchestrator | Monday 19 May 2025 22:10:37 +0000 (0:00:00.339) 0:01:27.703 ************ 2025-05-19 22:18:12.417888 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 22:18:12.417899 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.417910 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 22:18:12.417921 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.417931 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-19 22:18:12.417942 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-19 22:18:12.417953 | orchestrator | 2025-05-19 22:18:12.417964 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-19 22:18:12.417975 | orchestrator | Monday 19 May 2025 22:10:44 +0000 (0:00:07.458) 0:01:35.161 ************ 2025-05-19 22:18:12.417986 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.417997 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418007 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418125 | orchestrator | 2025-05-19 22:18:12.418142 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-19 22:18:12.418192 | orchestrator | Monday 19 May 2025 22:10:45 +0000 (0:00:00.292) 0:01:35.454 ************ 2025-05-19 22:18:12.418205 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-19 22:18:12.418216 | 
orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.418227 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-19 22:18:12.418238 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418248 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-19 22:18:12.418259 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418270 | orchestrator | 2025-05-19 22:18:12.418280 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-19 22:18:12.418291 | orchestrator | Monday 19 May 2025 22:10:45 +0000 (0:00:00.619) 0:01:36.073 ************ 2025-05-19 22:18:12.418302 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418313 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418324 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.418334 | orchestrator | 2025-05-19 22:18:12.418345 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-05-19 22:18:12.418356 | orchestrator | Monday 19 May 2025 22:10:46 +0000 (0:00:00.488) 0:01:36.563 ************ 2025-05-19 22:18:12.418367 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418377 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418388 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.418399 | orchestrator | 2025-05-19 22:18:12.418409 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-05-19 22:18:12.418420 | orchestrator | Monday 19 May 2025 22:10:47 +0000 (0:00:01.020) 0:01:37.583 ************ 2025-05-19 22:18:12.418431 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418442 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418467 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.418478 | orchestrator | 2025-05-19 22:18:12.418489 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] 
*********************** 2025-05-19 22:18:12.418500 | orchestrator | Monday 19 May 2025 22:10:49 +0000 (0:00:02.468) 0:01:40.052 ************ 2025-05-19 22:18:12.418511 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418521 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418532 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:12.418543 | orchestrator | 2025-05-19 22:18:12.418553 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-19 22:18:12.418564 | orchestrator | Monday 19 May 2025 22:11:09 +0000 (0:00:19.983) 0:02:00.035 ************ 2025-05-19 22:18:12.418575 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418595 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418605 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:12.418615 | orchestrator | 2025-05-19 22:18:12.418625 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-19 22:18:12.418634 | orchestrator | Monday 19 May 2025 22:11:21 +0000 (0:00:11.389) 0:02:11.424 ************ 2025-05-19 22:18:12.418644 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:12.418654 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418701 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418711 | orchestrator | 2025-05-19 22:18:12.418721 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-05-19 22:18:12.418731 | orchestrator | Monday 19 May 2025 22:11:22 +0000 (0:00:01.133) 0:02:12.558 ************ 2025-05-19 22:18:12.418740 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418750 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418760 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.418770 | orchestrator | 2025-05-19 22:18:12.418779 | orchestrator | TASK [nova-cell : Update cell] 
************************************************* 2025-05-19 22:18:12.418789 | orchestrator | Monday 19 May 2025 22:11:33 +0000 (0:00:11.122) 0:02:23.681 ************ 2025-05-19 22:18:12.418799 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.418809 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418818 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418852 | orchestrator | 2025-05-19 22:18:12.418871 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-19 22:18:12.418881 | orchestrator | Monday 19 May 2025 22:11:35 +0000 (0:00:01.876) 0:02:25.557 ************ 2025-05-19 22:18:12.418891 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.418901 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.418910 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.418920 | orchestrator | 2025-05-19 22:18:12.418930 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-05-19 22:18:12.418940 | orchestrator | 2025-05-19 22:18:12.418949 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 22:18:12.418959 | orchestrator | Monday 19 May 2025 22:11:35 +0000 (0:00:00.378) 0:02:25.936 ************ 2025-05-19 22:18:12.418969 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:18:12.418980 | orchestrator | 2025-05-19 22:18:12.418990 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-05-19 22:18:12.419000 | orchestrator | Monday 19 May 2025 22:11:36 +0000 (0:00:00.562) 0:02:26.499 ************ 2025-05-19 22:18:12.419010 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-05-19 22:18:12.419020 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-05-19 22:18:12.419054 | 
orchestrator | 2025-05-19 22:18:12.419064 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-05-19 22:18:12.419074 | orchestrator | Monday 19 May 2025 22:11:39 +0000 (0:00:03.044) 0:02:29.543 ************ 2025-05-19 22:18:12.419084 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-05-19 22:18:12.419096 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-05-19 22:18:12.419106 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-05-19 22:18:12.419120 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-05-19 22:18:12.419130 | orchestrator | 2025-05-19 22:18:12.419140 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-05-19 22:18:12.419150 | orchestrator | Monday 19 May 2025 22:11:45 +0000 (0:00:06.564) 0:02:36.108 ************ 2025-05-19 22:18:12.419160 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 22:18:12.419177 | orchestrator | 2025-05-19 22:18:12.419187 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-05-19 22:18:12.419196 | orchestrator | Monday 19 May 2025 22:11:48 +0000 (0:00:03.089) 0:02:39.197 ************ 2025-05-19 22:18:12.419206 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 22:18:12.419216 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-05-19 22:18:12.419225 | orchestrator | 2025-05-19 22:18:12.419235 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-05-19 22:18:12.419245 | orchestrator | Monday 19 May 2025 22:11:52 +0000 (0:00:03.632) 0:02:42.830 
************ 2025-05-19 22:18:12.419254 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 22:18:12.419264 | orchestrator | 2025-05-19 22:18:12.419273 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-05-19 22:18:12.419283 | orchestrator | Monday 19 May 2025 22:11:55 +0000 (0:00:02.983) 0:02:45.813 ************ 2025-05-19 22:18:12.419293 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-05-19 22:18:12.419302 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-05-19 22:18:12.419312 | orchestrator | 2025-05-19 22:18:12.419322 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-19 22:18:12.419348 | orchestrator | Monday 19 May 2025 22:12:02 +0000 (0:00:07.429) 0:02:53.243 ************ 2025-05-19 22:18:12.419364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.419378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.419396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.419423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.419435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.419447 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.419457 | orchestrator | 2025-05-19 22:18:12.419467 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-05-19 22:18:12.419476 | orchestrator | Monday 19 May 2025 22:12:04 +0000 (0:00:01.812) 0:02:55.056 ************ 2025-05-19 22:18:12.419486 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.419496 | orchestrator | 2025-05-19 22:18:12.419523 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-05-19 22:18:12.419534 | orchestrator | Monday 19 May 2025 22:12:04 +0000 (0:00:00.128) 0:02:55.184 ************ 2025-05-19 22:18:12.419543 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.419553 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.419563 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.419572 | orchestrator | 2025-05-19 22:18:12.419582 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-05-19 22:18:12.419591 | orchestrator | Monday 19 May 2025 22:12:05 +0000 (0:00:00.403) 0:02:55.588 ************ 2025-05-19 22:18:12.419601 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 22:18:12.419617 | orchestrator | 2025-05-19 22:18:12.419627 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-05-19 22:18:12.419636 | orchestrator | Monday 19 May 2025 22:12:05 +0000 
(0:00:00.587) 0:02:56.175 ************ 2025-05-19 22:18:12.419646 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.419656 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.419665 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.419675 | orchestrator | 2025-05-19 22:18:12.419684 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-19 22:18:12.419694 | orchestrator | Monday 19 May 2025 22:12:06 +0000 (0:00:00.261) 0:02:56.437 ************ 2025-05-19 22:18:12.419704 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:18:12.419713 | orchestrator | 2025-05-19 22:18:12.419723 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-05-19 22:18:12.419732 | orchestrator | Monday 19 May 2025 22:12:07 +0000 (0:00:01.001) 0:02:57.439 ************ 2025-05-19 22:18:12.419748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.419768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.419780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.419809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.419820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-05-19 22:18:12.419836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.419846 | orchestrator | 2025-05-19 22:18:12.419856 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-05-19 22:18:12.419866 | orchestrator | Monday 19 May 2025 22:12:10 +0000 (0:00:02.984) 0:03:00.424 ************ 2025-05-19 22:18:12.419877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:18:12.419896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.419906 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.419921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:18:12.419932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.419942 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.419960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:18:12.419971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.419988 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.419998 | orchestrator | 2025-05-19 22:18:12.420007 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-19 22:18:12.420017 | orchestrator | Monday 19 May 2025 22:12:11 +0000 (0:00:00.926) 0:03:01.350 ************ 2025-05-19 22:18:12.420050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:18:12.420062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.420072 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12 | INFO  | Task eb47a7a0-fab5-4346-8b97-a2d6fabc29f2 is in state SUCCESS 2025-05-19 22:18:12.420090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:18:12.420114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.420130 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.420141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:18:12.420156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.420167 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.420176 | orchestrator | 2025-05-19 22:18:12.420186 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-05-19 22:18:12.420196 | orchestrator | Monday 19 May 2025 22:12:11 +0000 (0:00:00.807) 0:03:02.158 ************ 2025-05-19 22:18:12.420214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.420232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.420248 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.420259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.420276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.420287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.420303 | orchestrator | 2025-05-19 22:18:12.420313 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-19 22:18:12.420323 | orchestrator | Monday 19 May 2025 22:12:14 +0000 (0:00:02.289) 0:03:04.448 ************ 2025-05-19 22:18:12.420333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.420348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.420367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 
'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-19 22:18:12.420384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.420395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.420405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.420414 | orchestrator | 2025-05-19 22:18:12.420424 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-19 22:18:12.420434 | orchestrator | Monday 19 May 2025 22:12:21 +0000 (0:00:06.940) 0:03:11.388 ************ 2025-05-19 22:18:12.420449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:18:12.420467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.420484 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.420494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:18:12.420505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.420515 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.420532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-19 22:18:12.420543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.420553 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.420563 | orchestrator | 2025-05-19 22:18:12.420573 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-19 22:18:12.420588 | orchestrator | Monday 19 May 2025 22:12:21 +0000 (0:00:00.534) 0:03:11.922 ************ 2025-05-19 22:18:12.420604 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.420614 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:18:12.420624 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:18:12.420633 | orchestrator | 2025-05-19 22:18:12.420643 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-19 22:18:12.420653 | orchestrator | Monday 19 May 2025 22:12:23 +0000 (0:00:02.014) 0:03:13.937 ************ 2025-05-19 22:18:12.420662 | orchestrator | skipping: [testbed-node-0] 2025-05-19 
22:18:12.420672 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:18:12.420682 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:18:12.420691 | orchestrator |
2025-05-19 22:18:12.420701 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-05-19 22:18:12.420710 | orchestrator | Monday 19 May 2025 22:12:24 +0000 (0:00:00.430) 0:03:14.368 ************
2025-05-19 22:18:12.420720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-19 22:18:12.420736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-19 22:18:12.420754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-19 22:18:12.420771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.420782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.420792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.420802 | orchestrator |
2025-05-19 22:18:12.420812 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-19 22:18:12.420821 | orchestrator | Monday 19 May 2025 22:12:25 +0000 (0:00:01.736) 0:03:16.104 ************
2025-05-19 22:18:12.420831 | orchestrator |
2025-05-19 22:18:12.420841 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-19 22:18:12.420851 | orchestrator | Monday 19 May 2025 22:12:25 +0000 (0:00:00.182) 0:03:16.287 ************
2025-05-19 22:18:12.420860 | orchestrator |
2025-05-19 22:18:12.420870 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-05-19 22:18:12.420879 | orchestrator | Monday 19 May 2025 22:12:26 +0000 (0:00:00.273) 0:03:16.561 ************
2025-05-19 22:18:12.420889 | orchestrator |
2025-05-19 22:18:12.420899 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-05-19 22:18:12.420908 | orchestrator | Monday 19 May 2025 22:12:26 +0000 (0:00:00.728) 0:03:17.289 ************
2025-05-19 22:18:12.420918 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:18:12.420928 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:18:12.420938 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:18:12.420947 | orchestrator |
2025-05-19 22:18:12.420957 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-05-19 22:18:12.420966 | orchestrator | Monday 19 May 2025 22:12:51 +0000 (0:00:24.912) 0:03:42.202 ************
2025-05-19 22:18:12.420976 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:18:12.420986 | orchestrator | changed: [testbed-node-1]
2025-05-19 22:18:12.421003 | orchestrator | changed: [testbed-node-2]
2025-05-19 22:18:12.421012 | orchestrator |
2025-05-19 22:18:12.421088 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-05-19 22:18:12.421101 | orchestrator |
2025-05-19 22:18:12.421111 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-19 22:18:12.421121 | orchestrator | Monday 19 May 2025 22:12:59 +0000 (0:00:07.257) 0:03:49.459 ************
2025-05-19 22:18:12.421131 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:18:12.421140 | orchestrator |
2025-05-19 22:18:12.421150 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-19 22:18:12.421160 | orchestrator | Monday 19 May 2025 22:13:01 +0000 (0:00:02.006) 0:03:51.465 ************
2025-05-19 22:18:12.421169 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:18:12.421179 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:18:12.421189 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:18:12.421198 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:18:12.421208 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:18:12.421217 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:18:12.421227 | orchestrator |
2025-05-19 22:18:12.421236 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-05-19 22:18:12.421246 | orchestrator | Monday 19 May 2025 22:13:02 +0000 (0:00:01.491) 0:03:52.957 ************
2025-05-19 22:18:12.421254 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:18:12.421262 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:18:12.421270 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:18:12.421278 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:18:12.421286 | orchestrator |
2025-05-19 22:18:12.421299 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-19 22:18:12.421307 | orchestrator | Monday 19 May 2025 22:13:03 +0000 (0:00:01.135) 0:03:54.092 ************
2025-05-19 22:18:12.421316 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-05-19 22:18:12.421324 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-05-19 22:18:12.421332 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-05-19 22:18:12.421339 | orchestrator |
2025-05-19 22:18:12.421347 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-19 22:18:12.421355 | orchestrator | Monday 19 May 2025 22:13:04 +0000 (0:00:00.887) 0:03:54.979 ************
2025-05-19 22:18:12.421363 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-05-19 22:18:12.421371 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-05-19 22:18:12.421379 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-05-19 22:18:12.421387 | orchestrator |
2025-05-19 22:18:12.421395 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-19 22:18:12.421403 | orchestrator | Monday 19 May 2025 22:13:06 +0000 (0:00:01.419) 0:03:56.399 ************
2025-05-19 22:18:12.421411 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-05-19 22:18:12.421419 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:18:12.421427 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-05-19 22:18:12.421435 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:18:12.421442 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-05-19 22:18:12.421450 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:18:12.421458 | orchestrator |
2025-05-19 22:18:12.421466 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-05-19 22:18:12.421474 | orchestrator | Monday 19 May 2025 22:13:07 +0000 (0:00:00.978) 0:03:57.378 ************
2025-05-19 22:18:12.421482 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 22:18:12.421490 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 22:18:12.421504 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:18:12.421512 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 22:18:12.421520 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 22:18:12.421527 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 22:18:12.421535 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 22:18:12.421543 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 22:18:12.421551 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:18:12.421559 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-19 22:18:12.421567 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 22:18:12.421575 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:18:12.421583 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 22:18:12.421591 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 22:18:12.421599 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-19 22:18:12.421607 | orchestrator |
2025-05-19 22:18:12.421615 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-05-19 22:18:12.421623 | orchestrator | Monday 19 May 2025 22:13:08 +0000 (0:00:01.365) 0:03:58.744 ************
2025-05-19 22:18:12.421631 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:18:12.421638 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:18:12.421646 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:18:12.421654 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:18:12.421662 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:18:12.421670 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:18:12.421678 | orchestrator |
2025-05-19 22:18:12.421686 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-05-19 22:18:12.421694 | orchestrator | Monday 19 May 2025 22:13:10 +0000 (0:00:01.932) 0:04:00.676 ************
2025-05-19 22:18:12.421706 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:18:12.421714 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:18:12.421722 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:18:12.421730 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:18:12.421738 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:18:12.421746 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:18:12.421753 | orchestrator |
2025-05-19 22:18:12.421761 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-05-19 22:18:12.421769 | orchestrator | Monday 19 May 2025 22:13:12 +0000 (0:00:01.897) 0:04:02.573 ************
2025-05-19 22:18:12.421782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-19 22:18:12.421792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-19 22:18:12.421806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-19 22:18:12.421815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-19 22:18:12.421828 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-19 22:18:12.421837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-19 22:18:12.421849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-19 22:18:12.421859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.421873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-19 22:18:12.421881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.421889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-19 22:18:12.421901 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.421915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.421929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.421937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.421945 | orchestrator |
2025-05-19 22:18:12.421953 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-19 22:18:12.421961 | orchestrator | Monday 19 May 2025 22:13:16 +0000 (0:00:04.391) 0:04:06.964 ************
2025-05-19 22:18:12.421970 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-19 22:18:12.421979 | orchestrator |
2025-05-19 22:18:12.421987 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-05-19 22:18:12.421995 | orchestrator | Monday 19 May 2025 22:13:17 +0000 (0:00:01.296) 0:04:08.261 ************
2025-05-19 22:18:12.422003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-19 22:18:12.422218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-19 22:18:12.422237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-19 22:18:12.422253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-19 22:18:12.422261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-19 22:18:12.422270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-19 22:18:12.422278 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-19 22:18:12.422300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-19 22:18:12.422309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-19 22:18:12.422323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.422331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.422339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.422347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.422364 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.422373 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.422386 | orchestrator |
2025-05-19 22:18:12.422395 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-05-19 22:18:12.422403 | orchestrator | Monday 19 May 2025 22:13:22 +0000 (0:00:04.676) 0:04:12.938 ************
2025-05-19 22:18:12.422411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-19 22:18:12.422421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-19 22:18:12.422429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.422437 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:18:12.422453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-19 22:18:12.422462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.422475 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.422484 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.422492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 22:18:12.422500 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.422508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.422516 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.422534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 22:18:12.422549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.422557 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.422565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 22:18:12.422574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.422582 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.422590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 22:18:12.422599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.422607 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.422614 | orchestrator | 2025-05-19 22:18:12.422622 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-05-19 22:18:12.422630 | orchestrator | Monday 19 May 2025 22:13:26 +0000 (0:00:03.521) 0:04:16.460 ************ 2025-05-19 22:18:12.422647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 22:18:12.422661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.422669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.422677 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.422686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 22:18:12.422694 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.422702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.422715 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.422753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 22:18:12.422763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.422772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 22:18:12.422780 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.422789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.422797 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.422805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 22:18:12.422814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.422837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.422847 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.422857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-19 22:18:12.422866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-19 22:18:12.422874 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:18:12.422884 | orchestrator |
2025-05-19 22:18:12.422892 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-19 22:18:12.422902 | orchestrator | Monday 19 May 2025 22:13:31 +0000 (0:00:04.996) 0:04:21.456 ************
2025-05-19 22:18:12.422911 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:18:12.422919 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:18:12.422928 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:18:12.422937 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-19 22:18:12.422946 | orchestrator |
2025-05-19 22:18:12.422955 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-05-19 22:18:12.422964 | orchestrator | Monday 19 May 2025 22:13:34 +0000 (0:00:03.080) 0:04:24.537 ************
2025-05-19 22:18:12.422973 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-19 22:18:12.422982 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-19 22:18:12.422992 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-19 22:18:12.423001 | orchestrator |
2025-05-19 22:18:12.423010 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-05-19 22:18:12.423018 | orchestrator | Monday 19 May 2025 22:13:36 +0000 (0:00:01.892) 0:04:26.429 ************
2025-05-19 22:18:12.423046 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-19 22:18:12.423055 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-19 22:18:12.423068 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-19 22:18:12.423077 | orchestrator |
2025-05-19 22:18:12.423086 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-05-19 22:18:12.423095 | orchestrator | Monday 19 May 2025 22:13:38 +0000 (0:00:02.309) 0:04:28.739 ************
2025-05-19 22:18:12.423104 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:18:12.423113 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:18:12.423121 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:18:12.423130 | orchestrator |
2025-05-19 22:18:12.423139 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-05-19 22:18:12.423148 | orchestrator | Monday 19 May 2025 22:13:39 +0000 (0:00:00.720) 0:04:29.460 ************
2025-05-19 22:18:12.423157 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:18:12.423166 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:18:12.423174 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:18:12.423182 | orchestrator |
2025-05-19 22:18:12.423190 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-05-19 22:18:12.423198 | orchestrator | Monday 19 May 2025 22:13:39 +0000 (0:00:00.560) 0:04:30.020 ************
2025-05-19 22:18:12.423206 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-19 22:18:12.423214 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-19 22:18:12.423222 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-19 22:18:12.423230 | orchestrator |
2025-05-19 22:18:12.423238 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-05-19 22:18:12.423246 | orchestrator | Monday 19 May 2025 22:13:42 +0000 (0:00:02.446) 0:04:32.467 ************
2025-05-19 22:18:12.423254 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-19 22:18:12.423262 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-19 22:18:12.423270 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-19 22:18:12.423277 | orchestrator |
2025-05-19 22:18:12.423285 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-05-19 22:18:12.423303 | orchestrator | Monday 19 May 2025 22:13:44 +0000 (0:00:01.975) 0:04:34.445 ************
2025-05-19 22:18:12.423311 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-05-19 22:18:12.423319 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-05-19 22:18:12.423328 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-05-19 22:18:12.423335 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-05-19 22:18:12.423343 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-05-19 22:18:12.423351 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-05-19 22:18:12.423359 | orchestrator |
2025-05-19 22:18:12.423367 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-05-19 22:18:12.423375 | orchestrator | Monday 19 May 2025 22:13:51 +0000 (0:00:06.893) 0:04:41.339 ************
2025-05-19 22:18:12.423383 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:18:12.423391 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:18:12.423399 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:18:12.423407 | orchestrator |
2025-05-19 22:18:12.423415 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-05-19 22:18:12.423423 | orchestrator | Monday 19 May 2025 22:13:51 +0000 (0:00:00.563) 0:04:41.902 ************
2025-05-19 22:18:12.423431 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:18:12.423439 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:18:12.423447 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:18:12.423455 | orchestrator |
2025-05-19 22:18:12.423463 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-05-19 22:18:12.423471 | orchestrator | Monday 19 May 2025 22:13:52 +0000 (0:00:00.498) 0:04:42.401 ************
2025-05-19 22:18:12.423479 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:18:12.423487 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:18:12.423494 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:18:12.423507 | orchestrator |
2025-05-19 22:18:12.423515 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-05-19 22:18:12.423523 | orchestrator | Monday 19 May 2025 22:13:54 +0000 (0:00:02.020) 0:04:44.422 ************
2025-05-19 22:18:12.423531 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-19 22:18:12.423540 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-19 22:18:12.423548 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-05-19 22:18:12.423556 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-19 22:18:12.423564 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-19 22:18:12.423572 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-05-19 22:18:12.423580 | orchestrator |
2025-05-19 22:18:12.423588 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-05-19 22:18:12.423596 | orchestrator | Monday 19 May 2025 22:13:58 +0000 (0:00:04.743) 0:04:49.166 ************
2025-05-19 22:18:12.423604 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-19 22:18:12.423612 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-19 22:18:12.423620 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-19 22:18:12.423628 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-19 22:18:12.423636 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:18:12.423644 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-19 22:18:12.423651 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:18:12.423659 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-19 22:18:12.423667 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:18:12.423675 | orchestrator |
2025-05-19 22:18:12.423683 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-05-19 22:18:12.423691 | orchestrator | Monday 19 May 2025 22:14:05 +0000 (0:00:06.414) 0:04:55.580 ************
2025-05-19 22:18:12.423699 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:18:12.423707 | orchestrator |
2025-05-19 22:18:12.423715 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-05-19 22:18:12.423723 | orchestrator | Monday 19 May 2025 22:14:05 +0000 (0:00:00.200) 0:04:55.781 ************
2025-05-19 22:18:12.423731 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:18:12.423739 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:18:12.423747 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:18:12.423754 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:18:12.423762 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:18:12.423770 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:18:12.423778 | orchestrator |
2025-05-19 22:18:12.423786 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-05-19 22:18:12.423794 | orchestrator | Monday 19 May 2025 22:14:06 +0000 (0:00:01.196) 0:04:56.978 ************
2025-05-19 22:18:12.423802 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-19 22:18:12.423810 | orchestrator |
2025-05-19 22:18:12.423818 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-05-19 22:18:12.423826 | orchestrator | Monday 19 May 2025 22:14:08 +0000 (0:00:01.730) 0:04:58.709 ************
2025-05-19 22:18:12.423834 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:18:12.423842 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:18:12.423850 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:18:12.423858 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:18:12.423870 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:18:12.423878 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:18:12.423886 | orchestrator |
2025-05-19 22:18:12.423901 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-05-19 22:18:12.423909 | orchestrator | Monday 19 May 2025 22:14:09 +0000 (0:00:01.023) 0:04:59.733 ************
2025-05-19 22:18:12.423918 | orchestrator | changed:
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 22:18:12.423927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 22:18:12.423935 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 22:18:12.423944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 22:18:12.423952 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 22:18:12.423969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 22:18:12.423983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 22:18:12.423992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424070 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424127 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424153 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424161 | orchestrator | 2025-05-19 22:18:12.424170 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-19 22:18:12.424177 | orchestrator | Monday 19 May 2025 22:14:14 +0000 (0:00:04.706) 0:05:04.439 ************ 2025-05-19 22:18:12.424186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 
22:18:12.424194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.424219 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 22:18:12.424228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.424236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 22:18:12.424244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.424253 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424295 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.424376 | orchestrator | 2025-05-19 22:18:12.424384 | orchestrator | TASK [nova-cell : 
Copying over Nova compute provider config] ******************* 2025-05-19 22:18:12.424392 | orchestrator | Monday 19 May 2025 22:14:21 +0000 (0:00:07.654) 0:05:12.094 ************ 2025-05-19 22:18:12.424400 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.424408 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.424416 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.424424 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.424432 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.424440 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.424448 | orchestrator | 2025-05-19 22:18:12.424455 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-05-19 22:18:12.424463 | orchestrator | Monday 19 May 2025 22:14:23 +0000 (0:00:01.641) 0:05:13.735 ************ 2025-05-19 22:18:12.424471 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-19 22:18:12.424479 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-19 22:18:12.424487 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-19 22:18:12.424495 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-19 22:18:12.424503 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-05-19 22:18:12.424511 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-05-19 22:18:12.424518 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-19 22:18:12.424527 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.424534 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-19 
22:18:12.424543 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.424550 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-05-19 22:18:12.424558 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.424566 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-19 22:18:12.424574 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-19 22:18:12.424582 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-05-19 22:18:12.424594 | orchestrator | 2025-05-19 22:18:12.424602 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-05-19 22:18:12.424610 | orchestrator | Monday 19 May 2025 22:14:27 +0000 (0:00:04.312) 0:05:18.047 ************ 2025-05-19 22:18:12.424618 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.424625 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.424631 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.424638 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.424645 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.424651 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.424658 | orchestrator | 2025-05-19 22:18:12.424664 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-05-19 22:18:12.424671 | orchestrator | Monday 19 May 2025 22:14:28 +0000 (0:00:00.880) 0:05:18.928 ************ 2025-05-19 22:18:12.424678 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 22:18:12.424684 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 22:18:12.424691 | 
orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 22:18:12.424698 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 22:18:12.424704 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-05-19 22:18:12.424711 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 22:18:12.424718 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-05-19 22:18:12.424724 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 22:18:12.424731 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-19 22:18:12.424738 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 22:18:12.424744 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.424751 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-05-19 22:18:12.424767 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 22:18:12.424774 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.424781 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-19 22:18:12.424788 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-05-19 22:18:12.424794 | orchestrator | 
skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-05-19 22:18:12.424801 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.424807 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-19 22:18:12.424814 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-19 22:18:12.424821 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-05-19 22:18:12.424827 | orchestrator | 2025-05-19 22:18:12.424834 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-05-19 22:18:12.424841 | orchestrator | Monday 19 May 2025 22:14:35 +0000 (0:00:06.463) 0:05:25.392 ************ 2025-05-19 22:18:12.424852 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 22:18:12.424858 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 22:18:12.424865 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-05-19 22:18:12.424872 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-19 22:18:12.424878 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-19 22:18:12.424885 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-19 22:18:12.424891 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-19 22:18:12.424898 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-19 22:18:12.424905 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 22:18:12.424911 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-05-19 22:18:12.424918 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-19 22:18:12.424925 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.424931 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 22:18:12.424938 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-19 22:18:12.424945 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-19 22:18:12.424951 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-19 22:18:12.424958 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.424965 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-19 22:18:12.424971 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-05-19 22:18:12.424978 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.424984 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-19 22:18:12.424991 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 22:18:12.424998 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 22:18:12.425004 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-19 22:18:12.425011 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 22:18:12.425018 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 
'dest': 'ssh_config'}) 2025-05-19 22:18:12.425038 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-19 22:18:12.425045 | orchestrator | 2025-05-19 22:18:12.425052 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-05-19 22:18:12.425059 | orchestrator | Monday 19 May 2025 22:14:41 +0000 (0:00:06.671) 0:05:32.064 ************ 2025-05-19 22:18:12.425065 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.425072 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.425078 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.425085 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.425092 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.425098 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.425105 | orchestrator | 2025-05-19 22:18:12.425111 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-05-19 22:18:12.425118 | orchestrator | Monday 19 May 2025 22:14:42 +0000 (0:00:00.500) 0:05:32.565 ************ 2025-05-19 22:18:12.425125 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.425131 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.425142 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.425197 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.425209 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.425216 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.425222 | orchestrator | 2025-05-19 22:18:12.425229 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-05-19 22:18:12.425236 | orchestrator | Monday 19 May 2025 22:14:42 +0000 (0:00:00.664) 0:05:33.229 ************ 2025-05-19 22:18:12.425242 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.425249 | orchestrator | skipping: 
[testbed-node-0] 2025-05-19 22:18:12.425255 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.425262 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:18:12.425269 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:18:12.425275 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:18:12.425282 | orchestrator | 2025-05-19 22:18:12.425288 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-05-19 22:18:12.425295 | orchestrator | Monday 19 May 2025 22:14:45 +0000 (0:00:02.257) 0:05:35.487 ************ 2025-05-19 22:18:12.425302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 22:18:12.425309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.425317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.425324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 22:18:12.425343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.425350 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.425357 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.425364 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.425371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-19 22:18:12.425379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-19 22:18:12.425386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.425397 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.425404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 22:18:12.425418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.425426 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.425433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 22:18:12.425440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.425447 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.425454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-19 22:18:12.425461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-19 22:18:12.425468 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.425479 | orchestrator | 2025-05-19 22:18:12.425485 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-05-19 22:18:12.425492 | 
orchestrator | Monday 19 May 2025 22:14:47 +0000 (0:00:01.951) 0:05:37.438 ************ 2025-05-19 22:18:12.425499 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-19 22:18:12.425506 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-19 22:18:12.425513 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.425519 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-19 22:18:12.425526 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-19 22:18:12.425533 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.425539 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-19 22:18:12.425546 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-19 22:18:12.425553 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.425559 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-19 22:18:12.425566 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-19 22:18:12.425573 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.425579 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-19 22:18:12.425586 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-19 22:18:12.425592 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.425599 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-19 22:18:12.425609 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-19 22:18:12.425619 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.425626 | orchestrator | 2025-05-19 22:18:12.425633 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-05-19 22:18:12.425639 | orchestrator | Monday 19 May 2025 22:14:47 +0000 (0:00:00.642) 0:05:38.081 ************ 2025-05-19 
22:18:12.425646 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425694 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425702 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425770 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-19 22:18:12.425781 | orchestrator | 2025-05-19 22:18:12.425788 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-19 22:18:12.425794 | orchestrator | Monday 19 May 2025 22:14:51 +0000 (0:00:03.263) 0:05:41.345 ************ 2025-05-19 22:18:12.425801 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.425808 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.425815 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.425821 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.425828 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.425835 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.425841 | orchestrator | 2025-05-19 22:18:12.425848 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 22:18:12.425855 | orchestrator | Monday 19 May 2025 22:14:51 +0000 (0:00:00.561) 0:05:41.906 ************ 2025-05-19 22:18:12.425862 | orchestrator | 2025-05-19 22:18:12.425868 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 22:18:12.425875 | orchestrator | Monday 19 
May 2025 22:14:51 +0000 (0:00:00.375) 0:05:42.282 ************ 2025-05-19 22:18:12.425881 | orchestrator | 2025-05-19 22:18:12.425888 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 22:18:12.425895 | orchestrator | Monday 19 May 2025 22:14:52 +0000 (0:00:00.133) 0:05:42.415 ************ 2025-05-19 22:18:12.425901 | orchestrator | 2025-05-19 22:18:12.425908 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 22:18:12.425915 | orchestrator | Monday 19 May 2025 22:14:52 +0000 (0:00:00.127) 0:05:42.543 ************ 2025-05-19 22:18:12.425921 | orchestrator | 2025-05-19 22:18:12.425928 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 22:18:12.425935 | orchestrator | Monday 19 May 2025 22:14:52 +0000 (0:00:00.125) 0:05:42.669 ************ 2025-05-19 22:18:12.425941 | orchestrator | 2025-05-19 22:18:12.425948 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-19 22:18:12.425955 | orchestrator | Monday 19 May 2025 22:14:52 +0000 (0:00:00.121) 0:05:42.790 ************ 2025-05-19 22:18:12.425961 | orchestrator | 2025-05-19 22:18:12.425968 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-05-19 22:18:12.425975 | orchestrator | Monday 19 May 2025 22:14:52 +0000 (0:00:00.121) 0:05:42.912 ************ 2025-05-19 22:18:12.425981 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.425988 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:18:12.425995 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:18:12.426001 | orchestrator | 2025-05-19 22:18:12.426008 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-05-19 22:18:12.426052 | orchestrator | Monday 19 May 2025 22:15:04 +0000 (0:00:12.377) 0:05:55.290 ************ 2025-05-19 
22:18:12.426065 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.426078 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:18:12.426085 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:18:12.426092 | orchestrator | 2025-05-19 22:18:12.426099 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-05-19 22:18:12.426105 | orchestrator | Monday 19 May 2025 22:15:23 +0000 (0:00:18.306) 0:06:13.597 ************ 2025-05-19 22:18:12.426112 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:18:12.426119 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:18:12.426125 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:18:12.426132 | orchestrator | 2025-05-19 22:18:12.426139 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-05-19 22:18:12.426150 | orchestrator | Monday 19 May 2025 22:15:44 +0000 (0:00:21.609) 0:06:35.206 ************ 2025-05-19 22:18:12.426156 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:18:12.426163 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:18:12.426170 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:18:12.426176 | orchestrator | 2025-05-19 22:18:12.426183 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-05-19 22:18:12.426190 | orchestrator | Monday 19 May 2025 22:16:28 +0000 (0:00:43.864) 0:07:19.070 ************ 2025-05-19 22:18:12.426196 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:18:12.426203 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:18:12.426209 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:18:12.426216 | orchestrator | 2025-05-19 22:18:12.426223 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-05-19 22:18:12.426229 | orchestrator | Monday 19 May 2025 22:16:29 +0000 (0:00:01.203) 0:07:20.274 ************ 2025-05-19 22:18:12.426236 
| orchestrator | changed: [testbed-node-3] 2025-05-19 22:18:12.426243 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:18:12.426249 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:18:12.426256 | orchestrator | 2025-05-19 22:18:12.426262 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-05-19 22:18:12.426269 | orchestrator | Monday 19 May 2025 22:16:30 +0000 (0:00:00.795) 0:07:21.070 ************ 2025-05-19 22:18:12.426276 | orchestrator | changed: [testbed-node-4] 2025-05-19 22:18:12.426282 | orchestrator | changed: [testbed-node-5] 2025-05-19 22:18:12.426289 | orchestrator | changed: [testbed-node-3] 2025-05-19 22:18:12.426296 | orchestrator | 2025-05-19 22:18:12.426302 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-05-19 22:18:12.426309 | orchestrator | Monday 19 May 2025 22:17:00 +0000 (0:00:29.879) 0:07:50.949 ************ 2025-05-19 22:18:12.426316 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.426322 | orchestrator | 2025-05-19 22:18:12.426329 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-05-19 22:18:12.426335 | orchestrator | Monday 19 May 2025 22:17:00 +0000 (0:00:00.135) 0:07:51.084 ************ 2025-05-19 22:18:12.426342 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.426349 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.426355 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.426362 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.426368 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.426375 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-05-19 22:18:12.426382 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-19 22:18:12.426389 | orchestrator | 2025-05-19 22:18:12.426396 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-05-19 22:18:12.426402 | orchestrator | Monday 19 May 2025 22:17:24 +0000 (0:00:23.747) 0:08:14.832 ************ 2025-05-19 22:18:12.426409 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.426416 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.426422 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.426429 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.426435 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.426442 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.426449 | orchestrator | 2025-05-19 22:18:12.426455 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-05-19 22:18:12.426462 | orchestrator | Monday 19 May 2025 22:17:35 +0000 (0:00:10.525) 0:08:25.357 ************ 2025-05-19 22:18:12.426469 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.426475 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.426482 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.426488 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.426495 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.426506 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-05-19 22:18:12.426513 | orchestrator | 2025-05-19 22:18:12.426520 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-19 22:18:12.426526 | orchestrator | Monday 19 May 2025 22:17:39 +0000 (0:00:03.973) 0:08:29.331 ************ 2025-05-19 22:18:12.426533 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-19 22:18:12.426540 | 
orchestrator | 2025-05-19 22:18:12.426546 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-19 22:18:12.426553 | orchestrator | Monday 19 May 2025 22:17:49 +0000 (0:00:10.919) 0:08:40.250 ************ 2025-05-19 22:18:12.426560 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-19 22:18:12.426566 | orchestrator | 2025-05-19 22:18:12.426573 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-05-19 22:18:12.426580 | orchestrator | Monday 19 May 2025 22:17:51 +0000 (0:00:01.535) 0:08:41.786 ************ 2025-05-19 22:18:12.426587 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.426593 | orchestrator | 2025-05-19 22:18:12.426600 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-05-19 22:18:12.426607 | orchestrator | Monday 19 May 2025 22:17:53 +0000 (0:00:01.547) 0:08:43.333 ************ 2025-05-19 22:18:12.426613 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-05-19 22:18:12.426620 | orchestrator | 2025-05-19 22:18:12.426627 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-05-19 22:18:12.426637 | orchestrator | Monday 19 May 2025 22:18:02 +0000 (0:00:09.929) 0:08:53.263 ************ 2025-05-19 22:18:12.426647 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:18:12.426654 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:18:12.426661 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:18:12.426667 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:12.426674 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:18:12.426680 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:18:12.426687 | orchestrator | 2025-05-19 22:18:12.426694 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-05-19 22:18:12.426700 | orchestrator | 2025-05-19 
22:18:12.426707 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-05-19 22:18:12.426714 | orchestrator | Monday 19 May 2025 22:18:04 +0000 (0:00:01.948) 0:08:55.211 ************ 2025-05-19 22:18:12.426720 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:12.426727 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:18:12.426734 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:18:12.426740 | orchestrator | 2025-05-19 22:18:12.426747 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-05-19 22:18:12.426754 | orchestrator | 2025-05-19 22:18:12.426760 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-05-19 22:18:12.426767 | orchestrator | Monday 19 May 2025 22:18:06 +0000 (0:00:01.192) 0:08:56.403 ************ 2025-05-19 22:18:12.426774 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.426780 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.426787 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.426794 | orchestrator | 2025-05-19 22:18:12.426800 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-05-19 22:18:12.426807 | orchestrator | 2025-05-19 22:18:12.426814 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-05-19 22:18:12.426820 | orchestrator | Monday 19 May 2025 22:18:06 +0000 (0:00:00.516) 0:08:56.920 ************ 2025-05-19 22:18:12.426827 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-05-19 22:18:12.426834 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-05-19 22:18:12.426840 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-05-19 22:18:12.426847 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-05-19 22:18:12.426853 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-05-19 22:18:12.426864 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-05-19 22:18:12.426871 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-05-19 22:18:12.426878 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-05-19 22:18:12.426884 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-05-19 22:18:12.426891 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-05-19 22:18:12.426897 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-05-19 22:18:12.426904 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-05-19 22:18:12.426911 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:18:12.426917 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-05-19 22:18:12.426924 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-05-19 22:18:12.426930 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-05-19 22:18:12.426937 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-05-19 22:18:12.426943 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-05-19 22:18:12.426950 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-05-19 22:18:12.426957 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:18:12.426963 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-05-19 22:18:12.426970 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-05-19 22:18:12.426976 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-05-19 22:18:12.426983 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-05-19 22:18:12.426990 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-05-19 
22:18:12.426996 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-05-19 22:18:12.427003 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:18:12.427010 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-05-19 22:18:12.427016 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-05-19 22:18:12.427036 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-05-19 22:18:12.427043 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-05-19 22:18:12.427050 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-05-19 22:18:12.427056 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-05-19 22:18:12.427063 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.427070 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.427077 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-05-19 22:18:12.427083 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-05-19 22:18:12.427090 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-05-19 22:18:12.427096 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-05-19 22:18:12.427103 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-05-19 22:18:12.427109 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-05-19 22:18:12.427116 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.427122 | orchestrator | 2025-05-19 22:18:12.427129 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-05-19 22:18:12.427136 | orchestrator | 2025-05-19 22:18:12.427142 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-05-19 22:18:12.427149 | orchestrator | Monday 19 May 2025 22:18:08 +0000 (0:00:01.523) 
0:08:58.444 ************ 2025-05-19 22:18:12.427156 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-05-19 22:18:12.427169 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-05-19 22:18:12.427177 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.427183 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-05-19 22:18:12.427195 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-05-19 22:18:12.427202 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.427209 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-05-19 22:18:12.427215 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-05-19 22:18:12.427222 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:12.427229 | orchestrator | 2025-05-19 22:18:12.427235 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-05-19 22:18:12.427242 | orchestrator | 2025-05-19 22:18:12.427249 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-05-19 22:18:12.427255 | orchestrator | Monday 19 May 2025 22:18:08 +0000 (0:00:00.798) 0:08:59.243 ************ 2025-05-19 22:18:12.427262 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.427269 | orchestrator | 2025-05-19 22:18:12.427275 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-05-19 22:18:12.427282 | orchestrator | 2025-05-19 22:18:12.427288 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-05-19 22:18:12.427295 | orchestrator | Monday 19 May 2025 22:18:09 +0000 (0:00:00.697) 0:08:59.941 ************ 2025-05-19 22:18:12.427302 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:12.427309 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:12.427315 | orchestrator | skipping: [testbed-node-2] 
2025-05-19 22:18:12.427322 | orchestrator | 2025-05-19 22:18:12.427328 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:18:12.427335 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:18:12.427343 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-05-19 22:18:12.427350 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-19 22:18:12.427357 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-05-19 22:18:12.427363 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-19 22:18:12.427370 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-05-19 22:18:12.427377 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-05-19 22:18:12.427383 | orchestrator | 2025-05-19 22:18:12.427390 | orchestrator | 2025-05-19 22:18:12.427397 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:18:12.427404 | orchestrator | Monday 19 May 2025 22:18:10 +0000 (0:00:00.445) 0:09:00.386 ************ 2025-05-19 22:18:12.427410 | orchestrator | =============================================================================== 2025-05-19 22:18:12.427417 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 43.86s 2025-05-19 22:18:12.427424 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.88s 2025-05-19 22:18:12.427430 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 26.76s 2025-05-19 22:18:12.427437 | orchestrator | nova : Restart 
nova-scheduler container -------------------------------- 24.91s 2025-05-19 22:18:12.427443 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.75s 2025-05-19 22:18:12.427450 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.61s 2025-05-19 22:18:12.427457 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.98s 2025-05-19 22:18:12.427463 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 18.31s 2025-05-19 22:18:12.427474 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.06s 2025-05-19 22:18:12.427480 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.38s 2025-05-19 22:18:12.427487 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 12.01s 2025-05-19 22:18:12.427494 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.39s 2025-05-19 22:18:12.427501 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.12s 2025-05-19 22:18:12.427507 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.92s 2025-05-19 22:18:12.427514 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.53s 2025-05-19 22:18:12.427520 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.93s 2025-05-19 22:18:12.427527 | orchestrator | nova-cell : Get a list of existing cells -------------------------------- 9.76s 2025-05-19 22:18:12.427534 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 7.65s 2025-05-19 22:18:12.427540 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.46s 2025-05-19 22:18:12.427547 | orchestrator | service-ks-register : nova | 
Granting user roles ------------------------ 7.43s 2025-05-19 22:18:12.427572 | orchestrator | 2025-05-19 22:18:12 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:18:12.427579 | orchestrator | 2025-05-19 22:18:12 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED 2025-05-19 22:18:12.427586 | orchestrator | 2025-05-19 22:18:12 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:18:15.471808 | orchestrator | 2025-05-19 22:18:15 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:18:15.472673 | orchestrator | 2025-05-19 22:18:15 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state STARTED 2025-05-19 22:18:15.472795 | orchestrator | 2025-05-19 22:18:15 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:18:18.529001 | orchestrator | 2025-05-19 22:18:18 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:18:18.531459 | orchestrator | 2025-05-19 22:18:18 | INFO  | Task 78311c0e-9e82-46bf-857a-2355205439c8 is in state SUCCESS 2025-05-19 22:18:18.533643 | orchestrator | 2025-05-19 22:18:18.534214 | orchestrator | 2025-05-19 22:18:18.534239 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:18:18.534255 | orchestrator | 2025-05-19 22:18:18.534266 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:18:18.534278 | orchestrator | Monday 19 May 2025 22:16:04 +0000 (0:00:00.303) 0:00:00.303 ************ 2025-05-19 22:18:18.534289 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:18.534301 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:18:18.534312 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:18:18.534323 | orchestrator | 2025-05-19 22:18:18.534334 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:18:18.534345 | orchestrator | 
Monday 19 May 2025 22:16:04 +0000 (0:00:00.380) 0:00:00.683 ************ 2025-05-19 22:18:18.534356 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-05-19 22:18:18.534368 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-05-19 22:18:18.534379 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-05-19 22:18:18.534390 | orchestrator | 2025-05-19 22:18:18.534401 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-05-19 22:18:18.534411 | orchestrator | 2025-05-19 22:18:18.534422 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-19 22:18:18.534433 | orchestrator | Monday 19 May 2025 22:16:05 +0000 (0:00:00.517) 0:00:01.201 ************ 2025-05-19 22:18:18.534444 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:18:18.534483 | orchestrator | 2025-05-19 22:18:18.534494 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-05-19 22:18:18.534505 | orchestrator | Monday 19 May 2025 22:16:05 +0000 (0:00:00.567) 0:00:01.768 ************ 2025-05-19 22:18:18.534519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.534536 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.534562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.534574 | orchestrator | 2025-05-19 22:18:18.534585 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-05-19 22:18:18.534596 | orchestrator | Monday 19 May 2025 22:16:06 +0000 (0:00:00.746) 0:00:02.515 ************ 2025-05-19 22:18:18.534607 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-05-19 22:18:18.534619 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-05-19 22:18:18.534630 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 22:18:18.534641 | orchestrator | 2025-05-19 
22:18:18.534653 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-05-19 22:18:18.534666 | orchestrator | Monday 19 May 2025 22:16:07 +0000 (0:00:00.863) 0:00:03.378 ************ 2025-05-19 22:18:18.534678 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:18:18.534691 | orchestrator | 2025-05-19 22:18:18.534704 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-05-19 22:18:18.534716 | orchestrator | Monday 19 May 2025 22:16:08 +0000 (0:00:00.748) 0:00:04.126 ************ 2025-05-19 22:18:18.534784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.534809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.534823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.534835 | orchestrator | 2025-05-19 22:18:18.534848 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-05-19 22:18:18.534860 | orchestrator | Monday 19 May 2025 22:16:09 +0000 (0:00:01.506) 0:00:05.632 ************ 2025-05-19 22:18:18.534873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 22:18:18.534892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 22:18:18.534906 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:18.534919 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:18.534964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 22:18:18.534988 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:18.535001 | orchestrator | 2025-05-19 22:18:18.535053 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-05-19 22:18:18.535067 | orchestrator | Monday 19 May 2025 22:16:09 +0000 (0:00:00.373) 0:00:06.005 ************ 2025-05-19 22:18:18.535079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 22:18:18.535090 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:18.535101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 22:18:18.535112 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:18.535124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-19 22:18:18.535135 | orchestrator | 
skipping: [testbed-node-2] 2025-05-19 22:18:18.535146 | orchestrator | 2025-05-19 22:18:18.535157 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-05-19 22:18:18.535168 | orchestrator | Monday 19 May 2025 22:16:10 +0000 (0:00:00.895) 0:00:06.901 ************ 2025-05-19 22:18:18.535185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.535198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.535258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.535272 | orchestrator | 2025-05-19 22:18:18.535283 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-05-19 22:18:18.535294 | orchestrator | Monday 19 May 2025 22:16:12 +0000 (0:00:01.248) 0:00:08.150 ************ 2025-05-19 22:18:18.535305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.535317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.535329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.535340 | orchestrator | 2025-05-19 22:18:18.535351 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-05-19 22:18:18.535362 | orchestrator | Monday 19 May 2025 22:16:13 +0000 (0:00:01.339) 0:00:09.489 ************ 2025-05-19 22:18:18.535373 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:18.535384 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:18.535396 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:18.535407 | orchestrator | 2025-05-19 22:18:18.535418 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-05-19 22:18:18.535429 | orchestrator | Monday 19 May 2025 22:16:13 +0000 (0:00:00.602) 0:00:10.091 ************ 2025-05-19 22:18:18.535440 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-19 22:18:18.535457 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-19 22:18:18.535476 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-05-19 22:18:18.535487 | orchestrator | 2025-05-19 22:18:18.535498 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-05-19 22:18:18.535509 | orchestrator | Monday 19 May 2025 22:16:15 +0000 (0:00:01.351) 0:00:11.443 ************ 2025-05-19 22:18:18.535520 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-19 22:18:18.535531 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-19 22:18:18.535541 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-19 22:18:18.535552 | orchestrator | 2025-05-19 22:18:18.535563 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-19 22:18:18.535574 | orchestrator | Monday 19 May 2025 22:16:16 +0000 (0:00:01.296) 0:00:12.740 ************ 2025-05-19 22:18:18.535615 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-19 22:18:18.535628 | orchestrator | 2025-05-19 22:18:18.535639 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-19 22:18:18.535650 | orchestrator | Monday 19 May 2025 22:16:17 +0000 (0:00:00.772) 0:00:13.513 ************ 2025-05-19 22:18:18.535661 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-19 22:18:18.535672 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-19 22:18:18.535683 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:18.535694 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:18:18.535705 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:18:18.535716 | orchestrator | 2025-05-19 22:18:18.535727 | orchestrator | TASK [grafana : Prune 
templated Grafana dashboards] **************************** 2025-05-19 22:18:18.535738 | orchestrator | Monday 19 May 2025 22:16:18 +0000 (0:00:00.720) 0:00:14.234 ************ 2025-05-19 22:18:18.535749 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:18.535760 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:18.535771 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:18.535782 | orchestrator | 2025-05-19 22:18:18.535793 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-19 22:18:18.535804 | orchestrator | Monday 19 May 2025 22:16:18 +0000 (0:00:00.769) 0:00:15.004 ************ 2025-05-19 22:18:18.535816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090886, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5660908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.535828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090886, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5660908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.535840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090886, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5660908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.535863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090877, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5590906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.535907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090877, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5590906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.535920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090877, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5590906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.535932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090867, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5560906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.535943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090867, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5560906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.535954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090867, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5560906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.535977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090884, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5630908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.535989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090884, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5630908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090884, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5630908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090852, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5510905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090852, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5510905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090852, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5510905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090871, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5580907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090871, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5580907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090871, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5580907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090883, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5620906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090883, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5620906, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090883, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5620906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090850, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5500906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090850, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 
'mtime': 1747612937.0, 'ctime': 1747689822.5500906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090850, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5500906, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090797, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5380902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090797, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 
1747612937.0, 'ctime': 1747689822.5380902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090797, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5380902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090856, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5520904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090856, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 
'mtime': 1747612937.0, 'ctime': 1747689822.5520904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090856, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5520904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090822, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5430903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090822, 'dev': 217, 'nlink': 1, 
'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5430903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090822, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5430903, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090879, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5610907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 
1090879, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5610907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090879, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5610907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090860, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5540905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090860, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5540905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090860, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5540905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090885, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5630908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090885, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5630908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090885, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5630908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090840, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5490904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.536629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090840, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5490904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090840, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5490904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090875, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5580907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090875, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5580907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090875, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5580907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090801, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5420904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090801, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5420904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090801, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5420904, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090830, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5470905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090830, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5470905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090830, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5470905, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090865, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5550907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090865, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5550907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090865, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5550907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090931, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5930912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090931, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5930912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090931, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5930912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090924, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.582091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090924, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.582091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090924, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.582091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090893, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5670907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090893, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5670907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090893, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5670907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090961, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6040914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090961, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6040914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090961, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6040914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090895, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5680907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090895, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5680907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.536996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090895, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5680907, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090956, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5990913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090956, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5990913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090956, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5990913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090963, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6090915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090963, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6090915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090963, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6090915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090948, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.595091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090948, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.595091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090948, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.595091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090954, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5980914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090954, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5980914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090954, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5980914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090897, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5690908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090897, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5690908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090897, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5690908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090926, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.583091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090926, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.583091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090926, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.583091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090964, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6100914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090964, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6100914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090964, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6100914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090959, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6010914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090959, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6010914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090959, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6010914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090908, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5730908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090908, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5730908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090908, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5730908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090902, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5700908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-19 22:18:18.537431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090902, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5700908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False,
'isgid': False}}) 2025-05-19 22:18:18.537441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090902, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5700908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090912, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.575091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090912, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.575091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090912, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.575091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090914, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.581091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090914, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.581091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090914, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.581091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090928, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.583091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090928, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 
1747689822.583091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090928, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.583091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090953, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5970912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090953, 'dev': 217, 
'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5970912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090953, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.5970912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090930, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.584091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090930, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.584091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090930, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.584091, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090968, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6120915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090968, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6120915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090968, 'dev': 217, 'nlink': 1, 'atime': 1747612937.0, 'mtime': 1747612937.0, 'ctime': 1747689822.6120915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-19 22:18:18.537684 | orchestrator | 2025-05-19 22:18:18.537694 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-19 22:18:18.537704 | orchestrator | Monday 19 May 2025 22:16:56 +0000 (0:00:37.103) 0:00:52.107 ************ 2025-05-19 22:18:18.537714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.537725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.537740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-19 22:18:18.537756 | orchestrator | 2025-05-19 22:18:18.537766 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-19 22:18:18.537776 | orchestrator | Monday 19 May 2025 22:16:56 +0000 (0:00:00.981) 0:00:53.089 ************ 2025-05-19 22:18:18.537786 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:18.537795 | orchestrator | 2025-05-19 22:18:18.537805 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 
2025-05-19 22:18:18.537815 | orchestrator | Monday 19 May 2025 22:16:59 +0000 (0:00:02.175) 0:00:55.264 ************ 2025-05-19 22:18:18.537825 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:18.537834 | orchestrator | 2025-05-19 22:18:18.537844 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-19 22:18:18.537854 | orchestrator | Monday 19 May 2025 22:17:02 +0000 (0:00:03.051) 0:00:58.315 ************ 2025-05-19 22:18:18.537864 | orchestrator | 2025-05-19 22:18:18.537874 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-19 22:18:18.537888 | orchestrator | Monday 19 May 2025 22:17:02 +0000 (0:00:00.147) 0:00:58.463 ************ 2025-05-19 22:18:18.537899 | orchestrator | 2025-05-19 22:18:18.537909 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-19 22:18:18.537932 | orchestrator | Monday 19 May 2025 22:17:02 +0000 (0:00:00.311) 0:00:58.774 ************ 2025-05-19 22:18:18.537942 | orchestrator | 2025-05-19 22:18:18.537962 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-19 22:18:18.537972 | orchestrator | Monday 19 May 2025 22:17:02 +0000 (0:00:00.307) 0:00:59.082 ************ 2025-05-19 22:18:18.537982 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:18.537992 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:18.538002 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:18:18.538065 | orchestrator | 2025-05-19 22:18:18.538098 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-19 22:18:18.538115 | orchestrator | Monday 19 May 2025 22:17:05 +0000 (0:00:02.546) 0:01:01.628 ************ 2025-05-19 22:18:18.538132 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:18.538149 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:18.538164 | 
orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-19 22:18:18.538183 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-05-19 22:18:18.538201 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-05-19 22:18:18.538219 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:18.538236 | orchestrator | 2025-05-19 22:18:18.538246 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-19 22:18:18.538256 | orchestrator | Monday 19 May 2025 22:17:43 +0000 (0:00:38.391) 0:01:40.019 ************ 2025-05-19 22:18:18.538266 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:18.538275 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:18:18.538285 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:18:18.538294 | orchestrator | 2025-05-19 22:18:18.538304 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-19 22:18:18.538314 | orchestrator | Monday 19 May 2025 22:18:11 +0000 (0:00:27.487) 0:02:07.507 ************ 2025-05-19 22:18:18.538324 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:18:18.538333 | orchestrator | 2025-05-19 22:18:18.538343 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-19 22:18:18.538363 | orchestrator | Monday 19 May 2025 22:18:13 +0000 (0:00:02.447) 0:02:09.954 ************ 2025-05-19 22:18:18.538373 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:18.538382 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:18:18.538392 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:18:18.538401 | orchestrator | 2025-05-19 22:18:18.538411 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-19 
22:18:18.538421 | orchestrator | Monday 19 May 2025 22:18:14 +0000 (0:00:00.329) 0:02:10.283 ************ 2025-05-19 22:18:18.538432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-05-19 22:18:18.538462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-05-19 22:18:18.538481 | orchestrator | 2025-05-19 22:18:18.538491 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-05-19 22:18:18.538501 | orchestrator | Monday 19 May 2025 22:18:16 +0000 (0:00:02.189) 0:02:12.473 ************ 2025-05-19 22:18:18.538511 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:18:18.538520 | orchestrator | 2025-05-19 22:18:18.538530 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:18:18.538540 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 22:18:18.538557 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 22:18:18.538567 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 22:18:18.538577 | orchestrator | 2025-05-19 22:18:18.538587 | orchestrator | 2025-05-19 22:18:18.538596 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:18:18.538606 | 
orchestrator | Monday 19 May 2025 22:18:16 +0000 (0:00:00.256) 0:02:12.729 ************ 2025-05-19 22:18:18.538616 | orchestrator | =============================================================================== 2025-05-19 22:18:18.538626 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.39s 2025-05-19 22:18:18.538635 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.10s 2025-05-19 22:18:18.538645 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 27.49s 2025-05-19 22:18:18.538655 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 3.05s 2025-05-19 22:18:18.538664 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.54s 2025-05-19 22:18:18.538683 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.45s 2025-05-19 22:18:18.538693 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.19s 2025-05-19 22:18:18.538703 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.18s 2025-05-19 22:18:18.538713 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.51s 2025-05-19 22:18:18.538722 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.35s 2025-05-19 22:18:18.538732 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.34s 2025-05-19 22:18:18.538742 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.30s 2025-05-19 22:18:18.538758 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.25s 2025-05-19 22:18:18.538768 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.98s 2025-05-19 22:18:18.538777 | orchestrator | 
service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.90s 2025-05-19 22:18:18.538787 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.86s 2025-05-19 22:18:18.538797 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.77s 2025-05-19 22:18:18.538806 | orchestrator | grafana : Prune templated Grafana dashboards ---------------------------- 0.77s 2025-05-19 22:18:18.538816 | orchestrator | grafana : Flush handlers ------------------------------------------------ 0.77s 2025-05-19 22:18:18.538825 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.75s 2025-05-19 22:18:18.538835 | orchestrator | 2025-05-19 22:18:18 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:18:21.583745 | orchestrator | 2025-05-19 22:18:21 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:18:21.583866 | orchestrator | 2025-05-19 22:18:21 | INFO  | Wait 1 second(s) until the next check
the next check 2025-05-19 22:19:50.091994 | orchestrator | 2025-05-19 22:19:50 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:19:50.092068 | orchestrator | 2025-05-19 22:19:50 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:19:53.146377 | orchestrator | 2025-05-19 22:19:53 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:19:53.146502 | orchestrator | 2025-05-19 22:19:53 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:19:56.197538 | orchestrator | 2025-05-19 22:19:56 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:19:56.197668 | orchestrator | 2025-05-19 22:19:56 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:19:59.250995 | orchestrator | 2025-05-19 22:19:59 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:19:59.251115 | orchestrator | 2025-05-19 22:19:59 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:02.295654 | orchestrator | 2025-05-19 22:20:02 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:02.295780 | orchestrator | 2025-05-19 22:20:02 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:05.347548 | orchestrator | 2025-05-19 22:20:05 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:05.347695 | orchestrator | 2025-05-19 22:20:05 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:08.397698 | orchestrator | 2025-05-19 22:20:08 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:08.397968 | orchestrator | 2025-05-19 22:20:08 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:11.449964 | orchestrator | 2025-05-19 22:20:11 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:11.450154 | orchestrator | 2025-05-19 22:20:11 | INFO  | Wait 1 second(s) until the next check 
2025-05-19 22:20:14.496922 | orchestrator | 2025-05-19 22:20:14 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:14.497500 | orchestrator | 2025-05-19 22:20:14 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:17.546486 | orchestrator | 2025-05-19 22:20:17 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:17.546612 | orchestrator | 2025-05-19 22:20:17 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:20.603889 | orchestrator | 2025-05-19 22:20:20 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:20.603969 | orchestrator | 2025-05-19 22:20:20 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:23.656925 | orchestrator | 2025-05-19 22:20:23 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:23.657046 | orchestrator | 2025-05-19 22:20:23 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:26.708605 | orchestrator | 2025-05-19 22:20:26 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:26.708738 | orchestrator | 2025-05-19 22:20:26 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:29.762957 | orchestrator | 2025-05-19 22:20:29 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:29.763073 | orchestrator | 2025-05-19 22:20:29 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:32.820765 | orchestrator | 2025-05-19 22:20:32 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:32.820944 | orchestrator | 2025-05-19 22:20:32 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:35.862316 | orchestrator | 2025-05-19 22:20:35 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:35.862439 | orchestrator | 2025-05-19 22:20:35 | INFO  | Wait 1 second(s) until the next check 2025-05-19 
22:20:38.935085 | orchestrator | 2025-05-19 22:20:38 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:38.935199 | orchestrator | 2025-05-19 22:20:38 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:42.004377 | orchestrator | 2025-05-19 22:20:42 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:42.004514 | orchestrator | 2025-05-19 22:20:42 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:45.061902 | orchestrator | 2025-05-19 22:20:45 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:45.062013 | orchestrator | 2025-05-19 22:20:45 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:48.122203 | orchestrator | 2025-05-19 22:20:48 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:48.122340 | orchestrator | 2025-05-19 22:20:48 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:51.187572 | orchestrator | 2025-05-19 22:20:51 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:51.187691 | orchestrator | 2025-05-19 22:20:51 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:54.231490 | orchestrator | 2025-05-19 22:20:54 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:54.231636 | orchestrator | 2025-05-19 22:20:54 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:20:57.297357 | orchestrator | 2025-05-19 22:20:57 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:20:57.297486 | orchestrator | 2025-05-19 22:20:57 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:21:00.359133 | orchestrator | 2025-05-19 22:21:00 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:21:00.359254 | orchestrator | 2025-05-19 22:21:00 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:21:03.409736 
| orchestrator | 2025-05-19 22:21:03 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state STARTED 2025-05-19 22:21:03.409889 | orchestrator | 2025-05-19 22:21:03 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:21:06.459314 | orchestrator | 2025-05-19 22:21:06 | INFO  | Task 9ee8f5b7-f086-4aae-b2a6-b9fd7324f988 is in state SUCCESS 2025-05-19 22:21:06.460488 | orchestrator | 2025-05-19 22:21:06.460534 | orchestrator | 2025-05-19 22:21:06.460547 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:21:06.460559 | orchestrator | 2025-05-19 22:21:06.460570 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:21:06.460583 | orchestrator | Monday 19 May 2025 22:16:19 +0000 (0:00:00.273) 0:00:00.273 ************ 2025-05-19 22:21:06.460594 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:21:06.460630 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:21:06.460641 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:21:06.460652 | orchestrator | 2025-05-19 22:21:06.460663 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:21:06.460674 | orchestrator | Monday 19 May 2025 22:16:20 +0000 (0:00:00.316) 0:00:00.590 ************ 2025-05-19 22:21:06.460685 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-05-19 22:21:06.460696 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-05-19 22:21:06.460707 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-05-19 22:21:06.460718 | orchestrator | 2025-05-19 22:21:06.460728 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-05-19 22:21:06.460739 | orchestrator | 2025-05-19 22:21:06.460750 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 22:21:06.460761 | 
orchestrator | Monday 19 May 2025 22:16:20 +0000 (0:00:00.467) 0:00:01.058 ************ 2025-05-19 22:21:06.460772 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:21:06.460784 | orchestrator | 2025-05-19 22:21:06.460795 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-05-19 22:21:06.460832 | orchestrator | Monday 19 May 2025 22:16:21 +0000 (0:00:00.607) 0:00:01.666 ************ 2025-05-19 22:21:06.460844 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-05-19 22:21:06.460855 | orchestrator | 2025-05-19 22:21:06.460866 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-05-19 22:21:06.460877 | orchestrator | Monday 19 May 2025 22:16:24 +0000 (0:00:03.099) 0:00:04.766 ************ 2025-05-19 22:21:06.460888 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-05-19 22:21:06.460899 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-05-19 22:21:06.460910 | orchestrator | 2025-05-19 22:21:06.460921 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-05-19 22:21:06.460932 | orchestrator | Monday 19 May 2025 22:16:31 +0000 (0:00:06.527) 0:00:11.294 ************ 2025-05-19 22:21:06.460943 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-19 22:21:06.460954 | orchestrator | 2025-05-19 22:21:06.460965 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-05-19 22:21:06.460977 | orchestrator | Monday 19 May 2025 22:16:34 +0000 (0:00:03.083) 0:00:14.377 ************ 2025-05-19 22:21:06.460987 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-19 22:21:06.460998 | orchestrator | changed: 
[testbed-node-0] => (item=octavia -> service) 2025-05-19 22:21:06.461009 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-05-19 22:21:06.461020 | orchestrator | 2025-05-19 22:21:06.461031 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-05-19 22:21:06.461042 | orchestrator | Monday 19 May 2025 22:16:41 +0000 (0:00:07.839) 0:00:22.217 ************ 2025-05-19 22:21:06.461719 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-19 22:21:06.461754 | orchestrator | 2025-05-19 22:21:06.461798 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-05-19 22:21:06.461854 | orchestrator | Monday 19 May 2025 22:16:45 +0000 (0:00:03.641) 0:00:25.858 ************ 2025-05-19 22:21:06.461866 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-19 22:21:06.461878 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-05-19 22:21:06.461888 | orchestrator | 2025-05-19 22:21:06.461900 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-05-19 22:21:06.461982 | orchestrator | Monday 19 May 2025 22:16:52 +0000 (0:00:07.039) 0:00:32.897 ************ 2025-05-19 22:21:06.461994 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-05-19 22:21:06.462006 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-05-19 22:21:06.462918 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-05-19 22:21:06.462944 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-05-19 22:21:06.462955 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-05-19 22:21:06.462966 | orchestrator | 2025-05-19 22:21:06.462977 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 
22:21:06.462988 | orchestrator | Monday 19 May 2025 22:17:08 +0000 (0:00:15.478) 0:00:48.376 ************ 2025-05-19 22:21:06.462999 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:21:06.463011 | orchestrator | 2025-05-19 22:21:06.463022 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-05-19 22:21:06.463033 | orchestrator | Monday 19 May 2025 22:17:08 +0000 (0:00:00.454) 0:00:48.830 ************ 2025-05-19 22:21:06.463044 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.463055 | orchestrator | 2025-05-19 22:21:06.463065 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-05-19 22:21:06.463077 | orchestrator | Monday 19 May 2025 22:17:13 +0000 (0:00:04.740) 0:00:53.570 ************ 2025-05-19 22:21:06.463088 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.463099 | orchestrator | 2025-05-19 22:21:06.463110 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-05-19 22:21:06.463165 | orchestrator | Monday 19 May 2025 22:17:16 +0000 (0:00:03.521) 0:00:57.091 ************ 2025-05-19 22:21:06.463178 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:21:06.463189 | orchestrator | 2025-05-19 22:21:06.463200 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-05-19 22:21:06.463211 | orchestrator | Monday 19 May 2025 22:17:19 +0000 (0:00:03.021) 0:01:00.113 ************ 2025-05-19 22:21:06.463221 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-05-19 22:21:06.463232 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-05-19 22:21:06.463243 | orchestrator | 2025-05-19 22:21:06.463254 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-05-19 22:21:06.463264 | 
orchestrator | Monday 19 May 2025 22:17:30 +0000 (0:00:10.596) 0:01:10.709 ************ 2025-05-19 22:21:06.463275 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-05-19 22:21:06.463286 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-05-19 22:21:06.463298 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-05-19 22:21:06.463310 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-05-19 22:21:06.463321 | orchestrator | 2025-05-19 22:21:06.463331 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-05-19 22:21:06.463342 | orchestrator | Monday 19 May 2025 22:17:45 +0000 (0:00:15.101) 0:01:25.811 ************ 2025-05-19 22:21:06.463353 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.463364 | orchestrator | 2025-05-19 22:21:06.463374 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-05-19 22:21:06.463385 | orchestrator | Monday 19 May 2025 22:17:50 +0000 (0:00:04.947) 0:01:30.758 ************ 2025-05-19 22:21:06.463421 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.463433 | orchestrator | 2025-05-19 22:21:06.463443 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-05-19 22:21:06.463454 | orchestrator | Monday 19 May 2025 22:17:55 +0000 (0:00:05.045) 0:01:35.804 ************ 2025-05-19 22:21:06.463465 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:21:06.463476 | orchestrator | 2025-05-19 22:21:06.463487 | orchestrator | TASK [octavia : Update loadbalancer 
management subnet] ************************* 2025-05-19 22:21:06.463506 | orchestrator | Monday 19 May 2025 22:17:55 +0000 (0:00:00.216) 0:01:36.021 ************ 2025-05-19 22:21:06.463519 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.463532 | orchestrator | 2025-05-19 22:21:06.463544 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 22:21:06.463557 | orchestrator | Monday 19 May 2025 22:17:59 +0000 (0:00:04.101) 0:01:40.122 ************ 2025-05-19 22:21:06.463569 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:21:06.463581 | orchestrator | 2025-05-19 22:21:06.463593 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-05-19 22:21:06.463605 | orchestrator | Monday 19 May 2025 22:18:01 +0000 (0:00:01.360) 0:01:41.483 ************ 2025-05-19 22:21:06.463617 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:21:06.463629 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.463649 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.463663 | orchestrator | 2025-05-19 22:21:06.463675 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-05-19 22:21:06.463687 | orchestrator | Monday 19 May 2025 22:18:06 +0000 (0:00:05.716) 0:01:47.199 ************ 2025-05-19 22:21:06.463700 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.463712 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.463724 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:21:06.463736 | orchestrator | 2025-05-19 22:21:06.463748 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-05-19 22:21:06.463760 | orchestrator | Monday 19 May 2025 22:18:11 +0000 (0:00:04.527) 0:01:51.726 ************ 2025-05-19 22:21:06.463773 | orchestrator | changed: 
[testbed-node-1] 2025-05-19 22:21:06.463785 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.463797 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.463866 | orchestrator | 2025-05-19 22:21:06.463880 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-05-19 22:21:06.463894 | orchestrator | Monday 19 May 2025 22:18:12 +0000 (0:00:00.743) 0:01:52.470 ************ 2025-05-19 22:21:06.463904 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:21:06.463915 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:21:06.463926 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:21:06.463937 | orchestrator | 2025-05-19 22:21:06.463947 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-05-19 22:21:06.463958 | orchestrator | Monday 19 May 2025 22:18:14 +0000 (0:00:01.895) 0:01:54.365 ************ 2025-05-19 22:21:06.463969 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:21:06.463980 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.463990 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.464001 | orchestrator | 2025-05-19 22:21:06.464012 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-05-19 22:21:06.464023 | orchestrator | Monday 19 May 2025 22:18:15 +0000 (0:00:01.262) 0:01:55.627 ************ 2025-05-19 22:21:06.464034 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.464044 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:21:06.464055 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.464066 | orchestrator | 2025-05-19 22:21:06.464077 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-05-19 22:21:06.464088 | orchestrator | Monday 19 May 2025 22:18:16 +0000 (0:00:01.191) 0:01:56.819 ************ 2025-05-19 22:21:06.464099 | orchestrator | changed: [testbed-node-1] 2025-05-19 
22:21:06.464109 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.464120 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.464131 | orchestrator | 2025-05-19 22:21:06.464179 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-05-19 22:21:06.464192 | orchestrator | Monday 19 May 2025 22:18:18 +0000 (0:00:01.963) 0:01:58.782 ************ 2025-05-19 22:21:06.464203 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.464223 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:21:06.464233 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.464244 | orchestrator | 2025-05-19 22:21:06.464255 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-05-19 22:21:06.464267 | orchestrator | Monday 19 May 2025 22:18:20 +0000 (0:00:01.792) 0:02:00.575 ************ 2025-05-19 22:21:06.464277 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:21:06.464288 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:21:06.464299 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:21:06.464310 | orchestrator | 2025-05-19 22:21:06.464321 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-05-19 22:21:06.464332 | orchestrator | Monday 19 May 2025 22:18:20 +0000 (0:00:00.613) 0:02:01.188 ************ 2025-05-19 22:21:06.464342 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:21:06.464353 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:21:06.464364 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:21:06.464375 | orchestrator | 2025-05-19 22:21:06.464386 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 22:21:06.464395 | orchestrator | Monday 19 May 2025 22:18:23 +0000 (0:00:02.853) 0:02:04.041 ************ 2025-05-19 22:21:06.464405 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-05-19 22:21:06.464415 | orchestrator | 2025-05-19 22:21:06.464424 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-05-19 22:21:06.464434 | orchestrator | Monday 19 May 2025 22:18:24 +0000 (0:00:00.793) 0:02:04.835 ************ 2025-05-19 22:21:06.464444 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:21:06.464453 | orchestrator | 2025-05-19 22:21:06.464463 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-05-19 22:21:06.464472 | orchestrator | Monday 19 May 2025 22:18:28 +0000 (0:00:03.648) 0:02:08.484 ************ 2025-05-19 22:21:06.464482 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:21:06.464492 | orchestrator | 2025-05-19 22:21:06.464501 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-05-19 22:21:06.464511 | orchestrator | Monday 19 May 2025 22:18:31 +0000 (0:00:03.113) 0:02:11.597 ************ 2025-05-19 22:21:06.464521 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-05-19 22:21:06.464531 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-05-19 22:21:06.464540 | orchestrator | 2025-05-19 22:21:06.464550 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-05-19 22:21:06.464559 | orchestrator | Monday 19 May 2025 22:18:37 +0000 (0:00:06.382) 0:02:17.979 ************ 2025-05-19 22:21:06.464569 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:21:06.464578 | orchestrator | 2025-05-19 22:21:06.464588 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-05-19 22:21:06.464598 | orchestrator | Monday 19 May 2025 22:18:40 +0000 (0:00:03.186) 0:02:21.166 ************ 2025-05-19 22:21:06.464607 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:21:06.464617 | orchestrator | ok: [testbed-node-1] 2025-05-19 
22:21:06.464627 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:21:06.464636 | orchestrator | 2025-05-19 22:21:06.464646 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-05-19 22:21:06.464655 | orchestrator | Monday 19 May 2025 22:18:41 +0000 (0:00:00.359) 0:02:21.525 ************ 2025-05-19 22:21:06.464673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.464722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.464735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.464746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.464757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.464774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.464785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.464824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.464867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.464880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.464890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.464900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.464915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.464932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
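The loop items logged above all share the kolla-ansible service-definition shape: each service key maps to a dict with `container_name`, `group`, `image`, `volumes`, `dimensions`, and an optional `healthcheck`. The empty strings in the `volumes` lists are placeholders left by optional mounts that are disabled in this deployment; they are filtered out before the container is started. A minimal sketch of that normalization, using a hypothetical `normalize_volumes` helper (not kolla-ansible's actual code) and one of the worker dicts from the log:

```python
# Sketch of normalizing a kolla-style service dict before container start.
# The dict shape mirrors the loop items in the log above; normalize_volumes
# is a hypothetical illustration, not kolla-ansible's API.

def normalize_volumes(service):
    """Drop the empty-string placeholders left by disabled optional mounts."""
    return [v for v in service.get("volumes", []) if v]

octavia_worker = {
    "container_name": "octavia_worker",
    "group": "octavia-worker",
    "enabled": True,
    "image": "registry.osism.tech/kolla/octavia-worker:2024.2",
    "volumes": [
        "/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
        "",  # placeholder: optional mount disabled in this deployment
        "",
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
        "timeout": "30",
    },
}

print(normalize_volumes(octavia_worker))
```

Only the four real bind mounts survive the filter; the healthcheck dict is passed through to the container engine unchanged.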
2025-05-19 22:21:06.464942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.464952 | orchestrator | 2025-05-19 22:21:06.464962 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-05-19 22:21:06.464972 | orchestrator | Monday 19 May 2025 22:18:43 +0000 (0:00:02.665) 0:02:24.191 ************ 2025-05-19 22:21:06.464982 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:21:06.464992 | orchestrator | 2025-05-19 22:21:06.465027 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-05-19 22:21:06.465038 | orchestrator | Monday 19 May 2025 22:18:44 +0000 (0:00:00.369) 0:02:24.560 ************ 2025-05-19 22:21:06.465048 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:21:06.465058 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:21:06.465067 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:21:06.465076 | orchestrator | 2025-05-19 22:21:06.465086 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-05-19 22:21:06.465096 | orchestrator | Monday 19 May 2025 22:18:44 +0000 (0:00:00.310) 0:02:24.871 ************ 2025-05-19 22:21:06.465106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:21:06.465116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:21:06.465131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.465148 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.465158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:21:06.465168 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:21:06.465205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:21:06.465217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:21:06.465227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.465237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.465259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:21:06.465269 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:21:06.465280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:21:06.465318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:21:06.465330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.465340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.465350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:21:06.465372 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:21:06.465382 | orchestrator | 2025-05-19 22:21:06.465392 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 22:21:06.465402 | orchestrator | Monday 19 May 2025 22:18:45 +0000 (0:00:00.751) 0:02:25.622 ************ 2025-05-19 22:21:06.465412 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:21:06.465422 | orchestrator | 2025-05-19 22:21:06.465436 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-05-19 22:21:06.465446 | orchestrator | Monday 19 May 2025 22:18:45 +0000 (0:00:00.574) 0:02:26.196 ************ 2025-05-19 22:21:06.465456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}}) 2025-05-19 22:21:06.465495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.465507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.465518 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.465533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.465548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.465558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.465569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.465585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.465596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.465606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.465619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.465630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.465640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.465754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.465776 | orchestrator | 2025-05-19 22:21:06.465786 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-05-19 22:21:06.465796 | orchestrator | Monday 19 May 2025 22:18:51 +0000 (0:00:05.190) 0:02:31.386 ************ 2025-05-19 22:21:06.465825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:21:06.465844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:21:06.465854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.465868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.465879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:21:06.465889 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:21:06.465908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:21:06.465919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:21:06.465934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.465944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.465958 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:21:06.465968 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:21:06.465978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:21:06.465994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:21:06.466004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.466073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.466086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:21:06.466096 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:21:06.466106 | orchestrator | 2025-05-19 22:21:06.466116 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-05-19 22:21:06.466126 | orchestrator | Monday 19 May 2025 22:18:51 +0000 (0:00:00.682) 0:02:32.069 ************ 2025-05-19 22:21:06.466141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:21:06.466151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:21:06.466161 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.466179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.466196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:21:06.466206 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:21:06.466216 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:21:06.466231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:21:06.466241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.466251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.466267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:21:06.466283 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:21:06.466293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-19 22:21:06.466304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-19 22:21:06.466318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.466328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-19 22:21:06.466338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-19 22:21:06.466348 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:21:06.466358 | orchestrator | 2025-05-19 22:21:06.466368 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-05-19 22:21:06.466377 | orchestrator | Monday 19 May 2025 22:18:52 +0000 (0:00:00.956) 0:02:33.025 ************ 2025-05-19 22:21:06.466400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.466411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.466426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.466436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.466447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.466457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.466478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466589 | orchestrator | 2025-05-19 22:21:06.466599 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-05-19 22:21:06.466609 | orchestrator | Monday 19 May 2025 22:18:58 +0000 (0:00:05.512) 0:02:38.538 ************ 2025-05-19 22:21:06.466619 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-19 
22:21:06.466629 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-19 22:21:06.466639 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-05-19 22:21:06.466648 | orchestrator | 2025-05-19 22:21:06.466658 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-05-19 22:21:06.466668 | orchestrator | Monday 19 May 2025 22:18:59 +0000 (0:00:01.708) 0:02:40.246 ************ 2025-05-19 22:21:06.466683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.466694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.466716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.466727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 
22:21:06.466737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.466748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.466762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.466932 | orchestrator | 2025-05-19 22:21:06.466941 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-05-19 22:21:06.466951 | orchestrator | Monday 19 May 2025 22:19:17 +0000 (0:00:17.577) 0:02:57.823 ************ 2025-05-19 22:21:06.466961 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.466971 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:21:06.466981 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.466990 | orchestrator | 2025-05-19 22:21:06.467000 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-05-19 22:21:06.467009 | orchestrator | Monday 19 May 2025 22:19:19 +0000 (0:00:01.508) 0:02:59.332 ************ 2025-05-19 22:21:06.467019 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-19 22:21:06.467029 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-19 22:21:06.467044 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-19 22:21:06.467054 | orchestrator | 
changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-19 22:21:06.467064 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-19 22:21:06.467074 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-19 22:21:06.467083 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-19 22:21:06.467093 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-19 22:21:06.467102 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-19 22:21:06.467112 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-19 22:21:06.467121 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-19 22:21:06.467130 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-19 22:21:06.467140 | orchestrator | 2025-05-19 22:21:06.467149 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-05-19 22:21:06.467159 | orchestrator | Monday 19 May 2025 22:19:24 +0000 (0:00:05.615) 0:03:04.947 ************ 2025-05-19 22:21:06.467168 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-19 22:21:06.467178 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-19 22:21:06.467187 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-19 22:21:06.467197 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-19 22:21:06.467207 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-19 22:21:06.467216 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-19 22:21:06.467226 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-19 22:21:06.467235 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-19 22:21:06.467245 | orchestrator | changed: 
[testbed-node-2] => (item=server_ca.cert.pem) 2025-05-19 22:21:06.467254 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-19 22:21:06.467264 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-19 22:21:06.467273 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-19 22:21:06.467283 | orchestrator | 2025-05-19 22:21:06.467297 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-05-19 22:21:06.467307 | orchestrator | Monday 19 May 2025 22:19:29 +0000 (0:00:04.985) 0:03:09.932 ************ 2025-05-19 22:21:06.467317 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-05-19 22:21:06.467326 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-05-19 22:21:06.467336 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-05-19 22:21:06.467345 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-05-19 22:21:06.467355 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-05-19 22:21:06.467364 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-05-19 22:21:06.467374 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-05-19 22:21:06.467382 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-05-19 22:21:06.467390 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-05-19 22:21:06.467401 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-05-19 22:21:06.467409 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-05-19 22:21:06.467416 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-05-19 22:21:06.467424 | orchestrator | 2025-05-19 22:21:06.467432 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 
2025-05-19 22:21:06.467440 | orchestrator | Monday 19 May 2025 22:19:34 +0000 (0:00:05.048) 0:03:14.981 ************ 2025-05-19 22:21:06.467448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.467462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.467471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-19 22:21:06.467484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.467496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.467505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-05-19 22:21:06.467513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.467525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.467534 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.467542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.467556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.467568 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-05-19 22:21:06.467576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.467585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.467598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-05-19 22:21:06.467607 | orchestrator | 2025-05-19 22:21:06.467615 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-05-19 22:21:06.467623 | orchestrator | Monday 19 May 2025 22:19:38 +0000 (0:00:03.477) 0:03:18.458 ************ 2025-05-19 22:21:06.467631 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:21:06.467639 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:21:06.467647 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:21:06.467655 | orchestrator | 2025-05-19 22:21:06.467668 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-05-19 22:21:06.467676 | orchestrator | Monday 19 May 2025 22:19:38 +0000 (0:00:00.293) 0:03:18.752 ************ 2025-05-19 22:21:06.467683 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.467691 | orchestrator | 2025-05-19 22:21:06.467699 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-05-19 22:21:06.467707 | orchestrator | Monday 19 May 2025 22:19:40 +0000 (0:00:01.916) 0:03:20.668 ************ 2025-05-19 22:21:06.467715 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.467722 | orchestrator | 2025-05-19 22:21:06.467730 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-05-19 22:21:06.467738 | orchestrator | Monday 19 May 2025 22:19:42 +0000 (0:00:02.386) 0:03:23.055 ************ 2025-05-19 22:21:06.467746 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.467754 
| orchestrator | 2025-05-19 22:21:06.467762 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-05-19 22:21:06.467770 | orchestrator | Monday 19 May 2025 22:19:44 +0000 (0:00:01.943) 0:03:24.999 ************ 2025-05-19 22:21:06.467777 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.467785 | orchestrator | 2025-05-19 22:21:06.467793 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-05-19 22:21:06.467801 | orchestrator | Monday 19 May 2025 22:19:46 +0000 (0:00:02.080) 0:03:27.079 ************ 2025-05-19 22:21:06.467824 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.467832 | orchestrator | 2025-05-19 22:21:06.467840 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-19 22:21:06.467848 | orchestrator | Monday 19 May 2025 22:20:06 +0000 (0:00:19.621) 0:03:46.701 ************ 2025-05-19 22:21:06.467856 | orchestrator | 2025-05-19 22:21:06.467863 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-19 22:21:06.467872 | orchestrator | Monday 19 May 2025 22:20:06 +0000 (0:00:00.069) 0:03:46.770 ************ 2025-05-19 22:21:06.467879 | orchestrator | 2025-05-19 22:21:06.467887 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-05-19 22:21:06.467895 | orchestrator | Monday 19 May 2025 22:20:06 +0000 (0:00:00.063) 0:03:46.833 ************ 2025-05-19 22:21:06.467903 | orchestrator | 2025-05-19 22:21:06.467911 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-05-19 22:21:06.467919 | orchestrator | Monday 19 May 2025 22:20:06 +0000 (0:00:00.068) 0:03:46.902 ************ 2025-05-19 22:21:06.467927 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.467935 | orchestrator | changed: [testbed-node-1] 2025-05-19 
22:21:06.467943 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.467951 | orchestrator | 2025-05-19 22:21:06.467959 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-05-19 22:21:06.467967 | orchestrator | Monday 19 May 2025 22:20:23 +0000 (0:00:17.167) 0:04:04.069 ************ 2025-05-19 22:21:06.467974 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.467988 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.467996 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:21:06.468004 | orchestrator | 2025-05-19 22:21:06.468012 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-05-19 22:21:06.468020 | orchestrator | Monday 19 May 2025 22:20:35 +0000 (0:00:11.494) 0:04:15.563 ************ 2025-05-19 22:21:06.468028 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.468036 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.468044 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:21:06.468051 | orchestrator | 2025-05-19 22:21:06.468059 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-05-19 22:21:06.468067 | orchestrator | Monday 19 May 2025 22:20:45 +0000 (0:00:10.684) 0:04:26.248 ************ 2025-05-19 22:21:06.468075 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:21:06.468083 | orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.468091 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.468103 | orchestrator | 2025-05-19 22:21:06.468111 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-05-19 22:21:06.468119 | orchestrator | Monday 19 May 2025 22:20:54 +0000 (0:00:08.107) 0:04:34.356 ************ 2025-05-19 22:21:06.468127 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:21:06.468135 | orchestrator | changed: [testbed-node-1] 2025-05-19 22:21:06.468143 
| orchestrator | changed: [testbed-node-2] 2025-05-19 22:21:06.468150 | orchestrator | 2025-05-19 22:21:06.468158 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:21:06.468166 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-19 22:21:06.468175 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 22:21:06.468183 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-19 22:21:06.468191 | orchestrator | 2025-05-19 22:21:06.468199 | orchestrator | 2025-05-19 22:21:06.468207 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:21:06.468215 | orchestrator | Monday 19 May 2025 22:21:04 +0000 (0:00:10.781) 0:04:45.137 ************ 2025-05-19 22:21:06.468227 | orchestrator | =============================================================================== 2025-05-19 22:21:06.468235 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.62s 2025-05-19 22:21:06.468243 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 17.58s 2025-05-19 22:21:06.468251 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.17s 2025-05-19 22:21:06.468259 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.48s 2025-05-19 22:21:06.468266 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.10s 2025-05-19 22:21:06.468274 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.49s 2025-05-19 22:21:06.468282 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.78s 2025-05-19 22:21:06.468290 | orchestrator | octavia : Restart 
octavia-health-manager container --------------------- 10.69s 2025-05-19 22:21:06.468297 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.60s 2025-05-19 22:21:06.468305 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.11s 2025-05-19 22:21:06.468313 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.84s 2025-05-19 22:21:06.468321 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.04s 2025-05-19 22:21:06.468328 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.53s 2025-05-19 22:21:06.468336 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.38s 2025-05-19 22:21:06.468344 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.72s 2025-05-19 22:21:06.468352 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.62s 2025-05-19 22:21:06.468359 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.51s 2025-05-19 22:21:06.468367 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.19s 2025-05-19 22:21:06.468375 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.05s 2025-05-19 22:21:06.468383 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.05s 2025-05-19 22:21:06.468391 | orchestrator | 2025-05-19 22:21:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 22:21:55.241277 | orchestrator | 2025-05-19 22:21:55 | INFO  | Task 0af96bb9-d947-4e24-86b5-e17ebf44b95b is in state STARTED 2025-05-19 22:21:55.241385 | orchestrator | 2025-05-19 22:21:55 | INFO  | Wait 1 second(s) until the next check 2025-05-19 22:22:16.633456 | orchestrator | 2025-05-19 22:22:16 | INFO  | Task 0af96bb9-d947-4e24-86b5-e17ebf44b95b is in state SUCCESS 2025-05-19 22:22:16.633544 | orchestrator | 2025-05-19 22:22:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-05-19 22:22:28.846692 | orchestrator | 2025-05-19 22:22:28.846812 | orchestrator | None 2025-05-19 22:22:29.154538 | 
orchestrator | 2025-05-19 22:22:29.158508 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon May 19 22:22:29 UTC 2025 2025-05-19 22:22:29.158560 | orchestrator | 2025-05-19 22:22:29.594172 | orchestrator | ok: Runtime: 0:34:39.084478 2025-05-19 22:22:29.857006 | 2025-05-19 22:22:29.857156 | TASK [Bootstrap services] 2025-05-19 22:22:30.586960 | orchestrator | 2025-05-19 22:22:30.587202 | orchestrator | # BOOTSTRAP 2025-05-19 22:22:30.587240 | orchestrator | 2025-05-19 22:22:30.587265 | orchestrator | + set -e 2025-05-19 22:22:30.587288 | orchestrator | + echo 2025-05-19 22:22:30.587312 | orchestrator | + echo '# BOOTSTRAP' 2025-05-19 22:22:30.587341 | orchestrator | + echo 2025-05-19 22:22:30.587403 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-05-19 22:22:30.595482 | orchestrator | + set -e 2025-05-19 22:22:30.595590 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-05-19 22:22:32.481287 | orchestrator | 2025-05-19 22:22:32 | INFO  | It takes a moment until task c0c7f011-f64e-443c-83a6-3afb58a65b07 (flavor-manager) has been started and output is visible here. 
2025-05-19 22:22:37.066009 | orchestrator | 2025-05-19 22:22:37 | INFO  | Flavor SCS-1V-4 created 2025-05-19 22:22:37.229993 | orchestrator | 2025-05-19 22:22:37 | INFO  | Flavor SCS-2V-8 created 2025-05-19 22:22:37.408487 | orchestrator | 2025-05-19 22:22:37 | INFO  | Flavor SCS-4V-16 created 2025-05-19 22:22:37.580370 | orchestrator | 2025-05-19 22:22:37 | INFO  | Flavor SCS-8V-32 created 2025-05-19 22:22:37.715838 | orchestrator | 2025-05-19 22:22:37 | INFO  | Flavor SCS-1V-2 created 2025-05-19 22:22:37.857249 | orchestrator | 2025-05-19 22:22:37 | INFO  | Flavor SCS-2V-4 created 2025-05-19 22:22:37.994137 | orchestrator | 2025-05-19 22:22:37 | INFO  | Flavor SCS-4V-8 created 2025-05-19 22:22:38.119676 | orchestrator | 2025-05-19 22:22:38 | INFO  | Flavor SCS-8V-16 created 2025-05-19 22:22:38.245401 | orchestrator | 2025-05-19 22:22:38 | INFO  | Flavor SCS-16V-32 created 2025-05-19 22:22:38.366542 | orchestrator | 2025-05-19 22:22:38 | INFO  | Flavor SCS-1V-8 created 2025-05-19 22:22:38.516411 | orchestrator | 2025-05-19 22:22:38 | INFO  | Flavor SCS-2V-16 created 2025-05-19 22:22:38.653097 | orchestrator | 2025-05-19 22:22:38 | INFO  | Flavor SCS-4V-32 created 2025-05-19 22:22:38.792924 | orchestrator | 2025-05-19 22:22:38 | INFO  | Flavor SCS-1L-1 created 2025-05-19 22:22:38.917583 | orchestrator | 2025-05-19 22:22:38 | INFO  | Flavor SCS-2V-4-20s created 2025-05-19 22:22:39.062420 | orchestrator | 2025-05-19 22:22:39 | INFO  | Flavor SCS-4V-16-100s created 2025-05-19 22:22:39.214274 | orchestrator | 2025-05-19 22:22:39 | INFO  | Flavor SCS-1V-4-10 created 2025-05-19 22:22:39.353935 | orchestrator | 2025-05-19 22:22:39 | INFO  | Flavor SCS-2V-8-20 created 2025-05-19 22:22:39.497686 | orchestrator | 2025-05-19 22:22:39 | INFO  | Flavor SCS-4V-16-50 created 2025-05-19 22:22:39.640564 | orchestrator | 2025-05-19 22:22:39 | INFO  | Flavor SCS-8V-32-100 created 2025-05-19 22:22:39.756708 | orchestrator | 2025-05-19 22:22:39 | INFO  | Flavor SCS-1V-2-5 created 
2025-05-19 22:22:39.908028 | orchestrator | 2025-05-19 22:22:39 | INFO  | Flavor SCS-2V-4-10 created 2025-05-19 22:22:40.043492 | orchestrator | 2025-05-19 22:22:40 | INFO  | Flavor SCS-4V-8-20 created 2025-05-19 22:22:40.180974 | orchestrator | 2025-05-19 22:22:40 | INFO  | Flavor SCS-8V-16-50 created 2025-05-19 22:22:40.327459 | orchestrator | 2025-05-19 22:22:40 | INFO  | Flavor SCS-16V-32-100 created 2025-05-19 22:22:40.471509 | orchestrator | 2025-05-19 22:22:40 | INFO  | Flavor SCS-1V-8-20 created 2025-05-19 22:22:40.592514 | orchestrator | 2025-05-19 22:22:40 | INFO  | Flavor SCS-2V-16-50 created 2025-05-19 22:22:40.724141 | orchestrator | 2025-05-19 22:22:40 | INFO  | Flavor SCS-4V-32-100 created 2025-05-19 22:22:40.850796 | orchestrator | 2025-05-19 22:22:40 | INFO  | Flavor SCS-1L-1-5 created 2025-05-19 22:22:43.081133 | orchestrator | 2025-05-19 22:22:43 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-05-19 22:22:43.141657 | orchestrator | 2025-05-19 22:22:43 | INFO  | Task a98acb3e-a209-44c6-96ec-7f8133451028 (bootstrap-basic) was prepared for execution. 2025-05-19 22:22:43.141864 | orchestrator | 2025-05-19 22:22:43 | INFO  | It takes a moment until task a98acb3e-a209-44c6-96ec-7f8133451028 (bootstrap-basic) has been started and output is visible here. 
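The flavor names created above follow the SCS flavor naming scheme: `SCS-<vCPUs><class>-<RAM in GiB>[-<disk in GB>[s]]`, where the class letter is `V` (dedicated vCPU with overcommit) or `L` (low-performance core) and a trailing `s` on the disk size marks local SSD storage. A minimal parser sketch of that convention (a simplified reading of the naming standard, not a complete implementation; real SCS names allow further suffixes not handled here):

```python
import re

# Simplified SCS flavor name pattern: SCS-<cpus><class>-<ram>[-<disk>[s]]
SCS_NAME = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cls>[VL])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Split an SCS flavor name into its resource components."""
    m = SCS_NAME.match(name)
    if m is None:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m["cpus"]),
        "cpu_class": m["cls"],
        "ram_gib": int(m["ram"]),
        "disk_gb": int(m["disk"]) if m["disk"] else 0,  # no disk part -> network/no disk
        "ssd": m["ssd"] is not None,                    # "s" suffix -> local SSD
    }
```

For example, `SCS-4V-16-100s` from the log parses to 4 vCPUs, 16 GiB RAM, and a 100 GB local SSD disk.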
2025-05-19 22:22:47.185995 | orchestrator | 2025-05-19 22:22:47.186132 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-05-19 22:22:47.186142 | orchestrator | 2025-05-19 22:22:47.186623 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-19 22:22:47.187298 | orchestrator | Monday 19 May 2025 22:22:47 +0000 (0:00:00.085) 0:00:00.085 ************ 2025-05-19 22:22:49.142990 | orchestrator | ok: [localhost] 2025-05-19 22:22:49.144646 | orchestrator | 2025-05-19 22:22:49.145082 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-05-19 22:22:49.145899 | orchestrator | Monday 19 May 2025 22:22:49 +0000 (0:00:01.961) 0:00:02.047 ************ 2025-05-19 22:22:59.032928 | orchestrator | ok: [localhost] 2025-05-19 22:22:59.034388 | orchestrator | 2025-05-19 22:22:59.035017 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-05-19 22:22:59.036910 | orchestrator | Monday 19 May 2025 22:22:59 +0000 (0:00:09.888) 0:00:11.936 ************ 2025-05-19 22:23:06.640007 | orchestrator | changed: [localhost] 2025-05-19 22:23:06.640121 | orchestrator | 2025-05-19 22:23:06.640138 | orchestrator | TASK [Get volume type local] *************************************************** 2025-05-19 22:23:06.640151 | orchestrator | Monday 19 May 2025 22:23:06 +0000 (0:00:07.606) 0:00:19.542 ************ 2025-05-19 22:23:12.950197 | orchestrator | ok: [localhost] 2025-05-19 22:23:12.951076 | orchestrator | 2025-05-19 22:23:12.951956 | orchestrator | TASK [Create volume type local] ************************************************ 2025-05-19 22:23:12.953789 | orchestrator | Monday 19 May 2025 22:23:12 +0000 (0:00:06.310) 0:00:25.852 ************ 2025-05-19 22:23:19.435768 | orchestrator | changed: [localhost] 2025-05-19 22:23:19.436833 | orchestrator | 2025-05-19 22:23:19.437692 | orchestrator | 
TASK [Create public network] *************************************************** 2025-05-19 22:23:19.438370 | orchestrator | Monday 19 May 2025 22:23:19 +0000 (0:00:06.485) 0:00:32.337 ************ 2025-05-19 22:23:24.356402 | orchestrator | changed: [localhost] 2025-05-19 22:23:24.356577 | orchestrator | 2025-05-19 22:23:24.357326 | orchestrator | TASK [Set public network to default] ******************************************* 2025-05-19 22:23:24.359908 | orchestrator | Monday 19 May 2025 22:23:24 +0000 (0:00:04.919) 0:00:37.257 ************ 2025-05-19 22:23:30.409182 | orchestrator | changed: [localhost] 2025-05-19 22:23:30.413269 | orchestrator | 2025-05-19 22:23:30.413311 | orchestrator | TASK [Create public subnet] **************************************************** 2025-05-19 22:23:30.413319 | orchestrator | Monday 19 May 2025 22:23:30 +0000 (0:00:06.055) 0:00:43.313 ************ 2025-05-19 22:23:35.037080 | orchestrator | changed: [localhost] 2025-05-19 22:23:35.038339 | orchestrator | 2025-05-19 22:23:35.040170 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-05-19 22:23:35.042309 | orchestrator | Monday 19 May 2025 22:23:35 +0000 (0:00:04.625) 0:00:47.939 ************ 2025-05-19 22:23:38.961838 | orchestrator | changed: [localhost] 2025-05-19 22:23:38.962185 | orchestrator | 2025-05-19 22:23:38.963941 | orchestrator | TASK [Create manager role] ***************************************************** 2025-05-19 22:23:38.965764 | orchestrator | Monday 19 May 2025 22:23:38 +0000 (0:00:03.925) 0:00:51.865 ************ 2025-05-19 22:23:42.586522 | orchestrator | ok: [localhost] 2025-05-19 22:23:42.586721 | orchestrator | 2025-05-19 22:23:42.587267 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:23:42.587993 | orchestrator | 2025-05-19 22:23:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
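The bootstrap-basic play above pairs each "Get ..." task with a "Create ..." task: look the resource up first and only create it when the lookup comes back empty, which is why a task reports `ok` when the resource already exists and `changed` when it was created, and why the whole bootstrap is safe to re-run. A sketch of that ensure pattern (the `find`/`create` client interface here is a hypothetical stand-in, not a real SDK API):

```python
def ensure_volume_type(client, name, extra_specs):
    """Return (volume_type, changed); create only when the lookup is empty."""
    existing = client.find(name)          # "Get volume type <name>"    -> ok
    if existing is not None:
        return existing, False
    return client.create(name, extra_specs), True  # "Create ..."      -> changed

class FakeClient:
    """In-memory stand-in for a cloud client, just enough to show the pattern."""
    def __init__(self):
        self.types = {}
    def find(self, name):
        return self.types.get(name)
    def create(self, name, specs):
        self.types[name] = {"name": name, **specs}
        return self.types[name]

client = FakeClient()
_, first = ensure_volume_type(client, "LUKS", {"encrypted": True})
_, second = ensure_volume_type(client, "LUKS", {"encrypted": True})
# first run creates the type (changed), the second run only finds it (ok)
```

This matches the play recap: ten tasks ran, six reported `changed` (the creates) and the rest `ok` (the lookups and pre-existing resources).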
2025-05-19 22:23:42.588113 | orchestrator | 2025-05-19 22:23:42 | INFO  | Please wait and do not abort execution. 2025-05-19 22:23:42.589486 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-19 22:23:42.590656 | orchestrator | 2025-05-19 22:23:42.591150 | orchestrator | 2025-05-19 22:23:42.591850 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:23:42.592244 | orchestrator | Monday 19 May 2025 22:23:42 +0000 (0:00:03.624) 0:00:55.489 ************ 2025-05-19 22:23:42.592687 | orchestrator | =============================================================================== 2025-05-19 22:23:42.593263 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.89s 2025-05-19 22:23:42.596840 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.61s 2025-05-19 22:23:42.597931 | orchestrator | Create volume type local ------------------------------------------------ 6.49s 2025-05-19 22:23:42.598316 | orchestrator | Get volume type local --------------------------------------------------- 6.31s 2025-05-19 22:23:42.599121 | orchestrator | Set public network to default ------------------------------------------- 6.06s 2025-05-19 22:23:42.599498 | orchestrator | Create public network --------------------------------------------------- 4.92s 2025-05-19 22:23:42.599920 | orchestrator | Create public subnet ---------------------------------------------------- 4.63s 2025-05-19 22:23:42.600260 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.93s 2025-05-19 22:23:42.600766 | orchestrator | Create manager role ----------------------------------------------------- 3.62s 2025-05-19 22:23:42.601085 | orchestrator | Gathering Facts --------------------------------------------------------- 1.96s 2025-05-19 22:23:44.985143 | orchestrator | 2025-05-19 22:23:44 
| INFO  | It takes a moment until task 98c97408-3a36-4d6e-b588-30c024969548 (image-manager) has been started and output is visible here. 2025-05-19 22:23:48.580143 | orchestrator | 2025-05-19 22:23:48 | INFO  | Processing image 'Cirros 0.6.2' 2025-05-19 22:23:48.786718 | orchestrator | 2025-05-19 22:23:48 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-05-19 22:23:48.789903 | orchestrator | 2025-05-19 22:23:48 | INFO  | Importing image Cirros 0.6.2 2025-05-19 22:23:48.790129 | orchestrator | 2025-05-19 22:23:48 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-05-19 22:23:50.596491 | orchestrator | 2025-05-19 22:23:50 | INFO  | Waiting for image to leave queued state... 2025-05-19 22:23:52.641500 | orchestrator | 2025-05-19 22:23:52 | INFO  | Waiting for import to complete... 2025-05-19 22:24:02.774202 | orchestrator | 2025-05-19 22:24:02 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-05-19 22:24:02.957299 | orchestrator | 2025-05-19 22:24:02 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-05-19 22:24:02.958831 | orchestrator | 2025-05-19 22:24:02 | INFO  | Setting internal_version = 0.6.2 2025-05-19 22:24:02.959376 | orchestrator | 2025-05-19 22:24:02 | INFO  | Setting image_original_user = cirros 2025-05-19 22:24:02.960045 | orchestrator | 2025-05-19 22:24:02 | INFO  | Adding tag os:cirros 2025-05-19 22:24:03.191681 | orchestrator | 2025-05-19 22:24:03 | INFO  | Setting property architecture: x86_64 2025-05-19 22:24:03.456714 | orchestrator | 2025-05-19 22:24:03 | INFO  | Setting property hw_disk_bus: scsi 2025-05-19 22:24:03.687706 | orchestrator | 2025-05-19 22:24:03 | INFO  | Setting property hw_rng_model: virtio 2025-05-19 22:24:03.868079 | orchestrator | 2025-05-19 22:24:03 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-19 22:24:04.086214 | orchestrator | 
2025-05-19 22:24:04 | INFO  | Setting property hw_watchdog_action: reset 2025-05-19 22:24:04.286795 | orchestrator | 2025-05-19 22:24:04 | INFO  | Setting property hypervisor_type: qemu 2025-05-19 22:24:04.485159 | orchestrator | 2025-05-19 22:24:04 | INFO  | Setting property os_distro: cirros 2025-05-19 22:24:04.681779 | orchestrator | 2025-05-19 22:24:04 | INFO  | Setting property replace_frequency: never 2025-05-19 22:24:04.921889 | orchestrator | 2025-05-19 22:24:04 | INFO  | Setting property uuid_validity: none 2025-05-19 22:24:05.157224 | orchestrator | 2025-05-19 22:24:05 | INFO  | Setting property provided_until: none 2025-05-19 22:24:05.344945 | orchestrator | 2025-05-19 22:24:05 | INFO  | Setting property image_description: Cirros 2025-05-19 22:24:05.580507 | orchestrator | 2025-05-19 22:24:05 | INFO  | Setting property image_name: Cirros 2025-05-19 22:24:05.791921 | orchestrator | 2025-05-19 22:24:05 | INFO  | Setting property internal_version: 0.6.2 2025-05-19 22:24:05.989523 | orchestrator | 2025-05-19 22:24:05 | INFO  | Setting property image_original_user: cirros 2025-05-19 22:24:06.230245 | orchestrator | 2025-05-19 22:24:06 | INFO  | Setting property os_version: 0.6.2 2025-05-19 22:24:06.433096 | orchestrator | 2025-05-19 22:24:06 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-05-19 22:24:06.655025 | orchestrator | 2025-05-19 22:24:06 | INFO  | Setting property image_build_date: 2023-05-30 2025-05-19 22:24:06.921650 | orchestrator | 2025-05-19 22:24:06 | INFO  | Checking status of 'Cirros 0.6.2' 2025-05-19 22:24:06.921785 | orchestrator | 2025-05-19 22:24:06 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-05-19 22:24:06.922982 | orchestrator | 2025-05-19 22:24:06 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-05-19 22:24:07.146907 | orchestrator | 2025-05-19 22:24:07 | INFO  | Processing image 'Cirros 0.6.3' 2025-05-19 22:24:07.379795 | 
orchestrator | 2025-05-19 22:24:07 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-05-19 22:24:07.380044 | orchestrator | 2025-05-19 22:24:07 | INFO  | Importing image Cirros 0.6.3 2025-05-19 22:24:07.381647 | orchestrator | 2025-05-19 22:24:07 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-05-19 22:24:08.687756 | orchestrator | 2025-05-19 22:24:08 | INFO  | Waiting for import to complete... 2025-05-19 22:24:18.806830 | orchestrator | 2025-05-19 22:24:18 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-05-19 22:24:19.081844 | orchestrator | 2025-05-19 22:24:19 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-05-19 22:24:19.082309 | orchestrator | 2025-05-19 22:24:19 | INFO  | Setting internal_version = 0.6.3 2025-05-19 22:24:19.083047 | orchestrator | 2025-05-19 22:24:19 | INFO  | Setting image_original_user = cirros 2025-05-19 22:24:19.083960 | orchestrator | 2025-05-19 22:24:19 | INFO  | Adding tag os:cirros 2025-05-19 22:24:19.289900 | orchestrator | 2025-05-19 22:24:19 | INFO  | Setting property architecture: x86_64 2025-05-19 22:24:19.476559 | orchestrator | 2025-05-19 22:24:19 | INFO  | Setting property hw_disk_bus: scsi 2025-05-19 22:24:19.784880 | orchestrator | 2025-05-19 22:24:19 | INFO  | Setting property hw_rng_model: virtio 2025-05-19 22:24:19.968232 | orchestrator | 2025-05-19 22:24:19 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-19 22:24:20.180778 | orchestrator | 2025-05-19 22:24:20 | INFO  | Setting property hw_watchdog_action: reset 2025-05-19 22:24:20.353952 | orchestrator | 2025-05-19 22:24:20 | INFO  | Setting property hypervisor_type: qemu 2025-05-19 22:24:20.551022 | orchestrator | 2025-05-19 22:24:20 | INFO  | Setting property os_distro: cirros 2025-05-19 22:24:20.751300 | orchestrator | 2025-05-19 22:24:20 | INFO  | Setting property 
replace_frequency: never 2025-05-19 22:24:20.934303 | orchestrator | 2025-05-19 22:24:20 | INFO  | Setting property uuid_validity: none 2025-05-19 22:24:21.131924 | orchestrator | 2025-05-19 22:24:21 | INFO  | Setting property provided_until: none 2025-05-19 22:24:21.339235 | orchestrator | 2025-05-19 22:24:21 | INFO  | Setting property image_description: Cirros 2025-05-19 22:24:21.518593 | orchestrator | 2025-05-19 22:24:21 | INFO  | Setting property image_name: Cirros 2025-05-19 22:24:21.722212 | orchestrator | 2025-05-19 22:24:21 | INFO  | Setting property internal_version: 0.6.3 2025-05-19 22:24:22.113969 | orchestrator | 2025-05-19 22:24:22 | INFO  | Setting property image_original_user: cirros 2025-05-19 22:24:22.313325 | orchestrator | 2025-05-19 22:24:22 | INFO  | Setting property os_version: 0.6.3 2025-05-19 22:24:22.534301 | orchestrator | 2025-05-19 22:24:22 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-05-19 22:24:22.765216 | orchestrator | 2025-05-19 22:24:22 | INFO  | Setting property image_build_date: 2024-09-26 2025-05-19 22:24:22.948873 | orchestrator | 2025-05-19 22:24:22 | INFO  | Checking status of 'Cirros 0.6.3' 2025-05-19 22:24:22.949103 | orchestrator | 2025-05-19 22:24:22 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-05-19 22:24:22.950524 | orchestrator | 2025-05-19 22:24:22 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-05-19 22:24:24.048864 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh 2025-05-19 22:24:26.026242 | orchestrator | 2025-05-19 22:24:26 | INFO  | date: 2025-05-19 2025-05-19 22:24:26.026344 | orchestrator | 2025-05-19 22:24:26 | INFO  | image: octavia-amphora-haproxy-2024.2.20250519.qcow2 2025-05-19 22:24:26.026360 | orchestrator | 2025-05-19 22:24:26 | INFO  | url:
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250519.qcow2 2025-05-19 22:24:26.026394 | orchestrator | 2025-05-19 22:24:26 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250519.qcow2.CHECKSUM 2025-05-19 22:24:26.055944 | orchestrator | 2025-05-19 22:24:26 | INFO  | checksum: 182419243ca6dc3f15969fa524833c630d9964bbf1d84efd76eee941e0be38b4 2025-05-19 22:24:26.133308 | orchestrator | 2025-05-19 22:24:26 | INFO  | It takes a moment until task 05530719-8e1f-4aee-8775-07437fa05053 (image-manager) has been started and output is visible here. 2025-05-19 22:24:28.593061 | orchestrator | 2025-05-19 22:24:28 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-05-19' 2025-05-19 22:24:28.608872 | orchestrator | 2025-05-19 22:24:28 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250519.qcow2: 200 2025-05-19 22:24:28.609496 | orchestrator | 2025-05-19 22:24:28 | INFO  | Importing image OpenStack Octavia Amphora 2025-05-19 2025-05-19 22:24:28.610401 | orchestrator | 2025-05-19 22:24:28 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250519.qcow2 2025-05-19 22:24:29.678439 | orchestrator | 2025-05-19 22:24:29 | INFO  | Waiting for image to leave queued state... 2025-05-19 22:24:31.717700 | orchestrator | 2025-05-19 22:24:31 | INFO  | Waiting for import to complete... 2025-05-19 22:24:41.828598 | orchestrator | 2025-05-19 22:24:41 | INFO  | Waiting for import to complete... 2025-05-19 22:24:51.923489 | orchestrator | 2025-05-19 22:24:51 | INFO  | Waiting for import to complete... 
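Before importing the Octavia amphora image, the 301 script resolves both the image URL and a companion `.CHECKSUM` URL and logs the expected SHA-256. A minimal verification sketch, assuming the common `sha256sum`-style "`<hex>  <filename>`" layout for the checksum file (the real file format may differ):

```python
import hashlib

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Compare the SHA-256 of downloaded image bytes against the published value."""
    return hashlib.sha256(data).hexdigest() == expected_hex.lower()

def parse_checksum_line(line: str, filename: str):
    """Extract the hex digest for `filename` from one sha256sum-style line.

    Returns None when the line refers to a different file. A leading "*"
    on the filename (sha256sum's binary-mode marker) is tolerated.
    """
    parts = line.split()
    if len(parts) >= 2 and parts[1].lstrip("*") == filename:
        return parts[0]
    return None
```

In the log above the published digest for `octavia-amphora-haproxy-2024.2.20250519.qcow2` is `182419...38b4`; the downloaded bytes would be hashed and compared against it before the image-manager import starts.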
2025-05-19 22:25:02.040234 | orchestrator | 2025-05-19 22:25:02 | INFO  | Waiting for import to complete... 2025-05-19 22:25:12.144660 | orchestrator | 2025-05-19 22:25:12 | INFO  | Waiting for import to complete... 2025-05-19 22:25:22.279293 | orchestrator | 2025-05-19 22:25:22 | INFO  | Import of 'OpenStack Octavia Amphora 2025-05-19' successfully completed, reloading images 2025-05-19 22:25:22.639528 | orchestrator | 2025-05-19 22:25:22 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-05-19' 2025-05-19 22:25:22.639777 | orchestrator | 2025-05-19 22:25:22 | INFO  | Setting internal_version = 2025-05-19 2025-05-19 22:25:22.640589 | orchestrator | 2025-05-19 22:25:22 | INFO  | Setting image_original_user = ubuntu 2025-05-19 22:25:22.640633 | orchestrator | 2025-05-19 22:25:22 | INFO  | Adding tag amphora 2025-05-19 22:25:22.856548 | orchestrator | 2025-05-19 22:25:22 | INFO  | Adding tag os:ubuntu 2025-05-19 22:25:23.033842 | orchestrator | 2025-05-19 22:25:23 | INFO  | Setting property architecture: x86_64 2025-05-19 22:25:23.233625 | orchestrator | 2025-05-19 22:25:23 | INFO  | Setting property hw_disk_bus: scsi 2025-05-19 22:25:23.420422 | orchestrator | 2025-05-19 22:25:23 | INFO  | Setting property hw_rng_model: virtio 2025-05-19 22:25:23.600526 | orchestrator | 2025-05-19 22:25:23 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-05-19 22:25:23.800499 | orchestrator | 2025-05-19 22:25:23 | INFO  | Setting property hw_watchdog_action: reset 2025-05-19 22:25:23.965368 | orchestrator | 2025-05-19 22:25:23 | INFO  | Setting property hypervisor_type: qemu 2025-05-19 22:25:24.174282 | orchestrator | 2025-05-19 22:25:24 | INFO  | Setting property os_distro: ubuntu 2025-05-19 22:25:24.373744 | orchestrator | 2025-05-19 22:25:24 | INFO  | Setting property replace_frequency: quarterly 2025-05-19 22:25:24.579236 | orchestrator | 2025-05-19 22:25:24 | INFO  | Setting property uuid_validity: last-1 2025-05-19 22:25:24.759482 | orchestrator | 
2025-05-19 22:25:24 | INFO  | Setting property provided_until: none 2025-05-19 22:25:24.982331 | orchestrator | 2025-05-19 22:25:24 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-05-19 22:25:25.155924 | orchestrator | 2025-05-19 22:25:25 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-05-19 22:25:25.345553 | orchestrator | 2025-05-19 22:25:25 | INFO  | Setting property internal_version: 2025-05-19 2025-05-19 22:25:25.561130 | orchestrator | 2025-05-19 22:25:25 | INFO  | Setting property image_original_user: ubuntu 2025-05-19 22:25:25.750129 | orchestrator | 2025-05-19 22:25:25 | INFO  | Setting property os_version: 2025-05-19 2025-05-19 22:25:25.997928 | orchestrator | 2025-05-19 22:25:25 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250519.qcow2 2025-05-19 22:25:26.239930 | orchestrator | 2025-05-19 22:25:26 | INFO  | Setting property image_build_date: 2025-05-19 2025-05-19 22:25:26.426395 | orchestrator | 2025-05-19 22:25:26 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-05-19' 2025-05-19 22:25:26.426551 | orchestrator | 2025-05-19 22:25:26 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-05-19' 2025-05-19 22:25:26.614744 | orchestrator | 2025-05-19 22:25:26 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-05-19 22:25:26.615312 | orchestrator | 2025-05-19 22:25:26 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-05-19 22:25:26.616095 | orchestrator | 2025-05-19 22:25:26 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-05-19 22:25:26.616797 | orchestrator | 2025-05-19 22:25:26 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-05-19 22:25:27.535757 | orchestrator | ok: Runtime: 0:02:57.003792 2025-05-19 22:25:27.550546 | 2025-05-19 
22:25:27.550681 | TASK [Run checks] 2025-05-19 22:25:28.252750 | orchestrator | + set -e 2025-05-19 22:25:28.252923 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 22:25:28.252937 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 22:25:28.252947 | orchestrator | ++ INTERACTIVE=false 2025-05-19 22:25:28.252954 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 22:25:28.252959 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 22:25:28.252966 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-05-19 22:25:28.253472 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-05-19 22:25:28.258191 | orchestrator | 2025-05-19 22:25:28.258245 | orchestrator | # CHECK 2025-05-19 22:25:28.258250 | orchestrator | 2025-05-19 22:25:28.258255 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 22:25:28.258262 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 22:25:28.258267 | orchestrator | + echo 2025-05-19 22:25:28.258271 | orchestrator | + echo '# CHECK' 2025-05-19 22:25:28.258275 | orchestrator | + echo 2025-05-19 22:25:28.258293 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-19 22:25:28.259455 | orchestrator | ++ semver latest 5.0.0 2025-05-19 22:25:28.318508 | orchestrator | 2025-05-19 22:25:28.318610 | orchestrator | ## Containers @ testbed-manager 2025-05-19 22:25:28.318624 | orchestrator | 2025-05-19 22:25:28.318635 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-19 22:25:28.318644 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 22:25:28.318653 | orchestrator | + echo 2025-05-19 22:25:28.318662 | orchestrator | + echo '## Containers @ testbed-manager' 2025-05-19 22:25:28.318671 | orchestrator | + echo 2025-05-19 22:25:28.318713 | orchestrator | + osism container testbed-manager ps 2025-05-19 22:25:30.630323 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS 
PORTS NAMES 2025-05-19 22:25:30.630453 | orchestrator | 420ba536fd54 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter 2025-05-19 22:25:30.630473 | orchestrator | 8397f007eee4 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_alertmanager 2025-05-19 22:25:30.630483 | orchestrator | 2c85d9ac46fd registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-05-19 22:25:30.630493 | orchestrator | 4c52a1fa6335 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-05-19 22:25:30.630502 | orchestrator | f9212a4eff43 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server 2025-05-19 22:25:30.630516 | orchestrator | 78d5892e94c4 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2025-05-19 22:25:30.630527 | orchestrator | cf6f79148976 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-05-19 22:25:30.630536 | orchestrator | 0964ba681d2c registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-05-19 22:25:30.630546 | orchestrator | 43fa717028a5 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-05-19 22:25:30.630579 | orchestrator | cec6b3d0c5a2 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin 2025-05-19 22:25:30.630588 | orchestrator | e04224cfbb23 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 31 minutes openstackclient 2025-05-19 22:25:30.630597 | orchestrator | 1c2a4ec18bd1 
registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 31 minutes ago Up 31 minutes (healthy) 8080/tcp homer 2025-05-19 22:25:30.630606 | orchestrator | 986b74aacd9e registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 50 minutes ago Up 50 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-05-19 22:25:30.630615 | orchestrator | c9fbaaff6476 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 54 minutes ago Up 53 minutes (healthy) manager-inventory_reconciler-1 2025-05-19 22:25:30.630625 | orchestrator | 29c89377b9ee registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 54 minutes ago Up 53 minutes (healthy) ceph-ansible 2025-05-19 22:25:30.630651 | orchestrator | abfb66be47c0 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 54 minutes ago Up 53 minutes (healthy) kolla-ansible 2025-05-19 22:25:30.630667 | orchestrator | b6bd7f7eed81 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 54 minutes ago Up 53 minutes (healthy) osism-kubernetes 2025-05-19 22:25:30.630700 | orchestrator | bdcb94933664 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 54 minutes ago Up 53 minutes (healthy) osism-ansible 2025-05-19 22:25:30.630709 | orchestrator | 12940538b3e4 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 54 minutes ago Up 53 minutes (healthy) 8000/tcp manager-ara-server-1 2025-05-19 22:25:30.630719 | orchestrator | cbf5451e0dc5 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 54 minutes ago Up 54 minutes (healthy) osismclient 2025-05-19 22:25:30.630728 | orchestrator | ef906b62e266 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-conductor-1 2025-05-19 22:25:30.630737 | orchestrator | 7d6f1337148e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Restarting (0) 53 seconds ago manager-api-1 2025-05-19 22:25:30.631323 
| orchestrator | f5cf19a167c4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-watchdog-1 2025-05-19 22:25:30.631433 | orchestrator | 30e0856dc17a registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-beat-1 2025-05-19 22:25:30.631447 | orchestrator | 6bef107faabd registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-flower-1 2025-05-19 22:25:30.631456 | orchestrator | 8d219dae6951 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-netbox-1 2025-05-19 22:25:30.631464 | orchestrator | 945f4af2843b registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" 54 minutes ago Up 54 minutes (healthy) 6379/tcp manager-redis-1 2025-05-19 22:25:30.631471 | orchestrator | 2cfe900b1be0 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 54 minutes ago Up 54 minutes (healthy) 3306/tcp manager-mariadb-1 2025-05-19 22:25:30.631479 | orchestrator | 2df4b98c97ae registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-listener-1 2025-05-19 22:25:30.631487 | orchestrator | d98d56b8b6b6 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 54 minutes ago Up 54 minutes (healthy) manager-openstack-1 2025-05-19 22:25:30.631504 | orchestrator | 98f7349a31dc registry.osism.tech/osism/netbox:v4.2.2 "/opt/netbox/venv/bi…" About an hour ago Up 55 minutes (healthy) netbox-netbox-worker-1 2025-05-19 22:25:30.631516 | orchestrator | 347a7d186385 registry.osism.tech/osism/netbox:v4.2.2 "/usr/bin/tini -- /o…" About an hour ago Up 59 minutes (healthy) netbox-netbox-1 2025-05-19 22:25:30.631528 | orchestrator | 4386542c128e registry.osism.tech/dockerhub/library/redis:7.4.3-alpine "docker-entrypoint.s…" About an hour ago Up 59 minutes (healthy) 6379/tcp netbox-redis-1 
2025-05-19 22:25:30.631541 | orchestrator | 38062cb965ae registry.osism.tech/dockerhub/library/postgres:16.9-alpine "docker-entrypoint.s…" About an hour ago Up 59 minutes (healthy) 5432/tcp netbox-postgres-1 2025-05-19 22:25:30.631553 | orchestrator | 17b2d218999a registry.osism.tech/dockerhub/library/traefik:v3.4.0 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-05-19 22:25:30.957348 | orchestrator | 2025-05-19 22:25:30.957486 | orchestrator | ## Images @ testbed-manager 2025-05-19 22:25:30.957512 | orchestrator | 2025-05-19 22:25:30.957525 | orchestrator | + echo 2025-05-19 22:25:30.957537 | orchestrator | + echo '## Images @ testbed-manager' 2025-05-19 22:25:30.957549 | orchestrator | + echo 2025-05-19 22:25:30.957563 | orchestrator | + osism container testbed-manager images 2025-05-19 22:25:33.119412 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-19 22:25:33.119532 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest ab7710f373c8 2 hours ago 306MB 2025-05-19 22:25:33.119574 | orchestrator | registry.osism.tech/osism/osism-ansible latest 8eed7de5b6d7 2 hours ago 556MB 2025-05-19 22:25:33.119587 | orchestrator | registry.osism.tech/osism/homer v25.05.2 df83d86990c5 19 hours ago 11MB 2025-05-19 22:25:33.119598 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 ffbdd10a1d31 19 hours ago 225MB 2025-05-19 22:25:33.119610 | orchestrator | registry.osism.tech/osism/cephclient reef 274f9656897d 19 hours ago 453MB 2025-05-19 22:25:33.119621 | orchestrator | registry.osism.tech/kolla/cron 2024.2 d1f2ebfdaafa 21 hours ago 325MB 2025-05-19 22:25:33.119632 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ff61877c6f9a 21 hours ago 635MB 2025-05-19 22:25:33.119643 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 85cbe560a6a5 21 hours ago 753MB 2025-05-19 22:25:33.119654 | orchestrator | 
registry.osism.tech/kolla/prometheus-v2-server 2024.2 0095342773fe 21 hours ago 898MB
2025-05-19 22:25:33.119665 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 db7b45ced52e 21 hours ago 417MB
2025-05-19 22:25:33.119743 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 18f9e2129e03 21 hours ago 463MB
2025-05-19 22:25:33.119756 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d8f8ac9b12c6 21 hours ago 365MB
2025-05-19 22:25:33.119767 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 b4a68173fb3b 21 hours ago 367MB
2025-05-19 22:25:33.119777 | orchestrator | registry.osism.tech/osism/osism latest c1760045c1e2 22 hours ago 339MB
2025-05-19 22:25:33.119788 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 293cd8fb4739 22 hours ago 573MB
2025-05-19 22:25:33.119799 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 3a406958f36e 22 hours ago 1.2GB
2025-05-19 22:25:33.119810 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 4c0257b04a85 22 hours ago 537MB
2025-05-19 22:25:33.119820 | orchestrator | registry.osism.tech/dockerhub/library/postgres 16.9-alpine b56133b65cd3 11 days ago 275MB
2025-05-19 22:25:33.119901 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.0 79e66182ffbe 2 weeks ago 224MB
2025-05-19 22:25:33.119915 | orchestrator | registry.osism.tech/dockerhub/hashicorp/vault 1.19.3 272792d172e0 2 weeks ago 504MB
2025-05-19 22:25:33.119927 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.3-alpine 9a07b03a1871 3 weeks ago 41.4MB
2025-05-19 22:25:33.119939 | orchestrator | registry.osism.tech/osism/netbox v4.2.2 de0f89b61971 7 weeks ago 817MB
2025-05-19 22:25:33.119950 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB
2025-05-19 22:25:33.119978 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 3 months ago 571MB
2025-05-19 22:25:33.119990 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 8 months ago 300MB
2025-05-19 22:25:33.120001 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB
2025-05-19 22:25:33.461644 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-05-19 22:25:33.462233 | orchestrator | ++ semver latest 5.0.0
2025-05-19 22:25:33.515773 | orchestrator |
2025-05-19 22:25:33.515869 | orchestrator | ## Containers @ testbed-node-0
2025-05-19 22:25:33.515885 | orchestrator |
2025-05-19 22:25:33.515899 | orchestrator | + [[ -1 -eq -1 ]]
2025-05-19 22:25:33.515913 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-19 22:25:33.515926 | orchestrator | + echo
2025-05-19 22:25:33.515940 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-05-19 22:25:33.515955 | orchestrator | + echo
2025-05-19 22:25:33.515969 | orchestrator | + osism container testbed-node-0 ps
2025-05-19 22:25:35.890617 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-05-19 22:25:35.890772 | orchestrator | dcb4da638108 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-05-19 22:25:35.890792 | orchestrator | 6ff9dee032b8 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-05-19 22:25:35.890805 | orchestrator | deb58585372f registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-05-19 22:25:35.890817 | orchestrator | 50d352e987ac registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-05-19 22:25:35.890828 | orchestrator | ce1ebd03e04a registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-05-19 22:25:35.890841 | orchestrator | 06c9af3c35a7 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-05-19 22:25:35.890873 | orchestrator | efb68e4af584 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-05-19 22:25:35.890885 | orchestrator | 35a5290a7a87 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-05-19 22:25:35.890896 | orchestrator | 4118b2110e50 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-05-19 22:25:35.890907 | orchestrator | f14330bec537 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-05-19 22:25:35.890918 | orchestrator | 23ffb177b535 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-05-19 22:25:35.890930 | orchestrator | a6de61ea6aa8 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-05-19 22:25:35.890941 | orchestrator | 8f61e9baffe7 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-05-19 22:25:35.890952 | orchestrator | 508b55b0cd99 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-05-19 22:25:35.890963 | orchestrator | b351723b1777 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-05-19 22:25:35.890974 | orchestrator | e35387fb5895 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-05-19 22:25:35.891006 | orchestrator | 8649954e1b95 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-05-19 22:25:35.891018 | orchestrator | b7cb056efc58 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-05-19 22:25:35.891035 | orchestrator | c5e5d7d1e557 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-05-19 22:25:35.891069 | orchestrator | 923b5490a7be registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-05-19 22:25:35.891081 | orchestrator | e2f909e6002d registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-05-19 22:25:35.891114 | orchestrator | 858c5fd33e52 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-05-19 22:25:35.891127 | orchestrator | b500b8b3a5c1 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-05-19 22:25:35.891138 | orchestrator | 3dca4c9bc7a2 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-05-19 22:25:35.891150 | orchestrator | aa302e6f5d90 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-05-19 22:25:35.891161 | orchestrator | 9d2664184fe2 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-05-19 22:25:35.891172 | orchestrator | 5e546db784ab registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-05-19 22:25:35.891183 | orchestrator | 47d11d760dad registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-05-19 22:25:35.891195 | orchestrator | 7c66535c7793 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-05-19 22:25:35.891205 | orchestrator | 868410b1b2e8 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-05-19 22:25:35.891216 | orchestrator | 9cc43c2c7177 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api
2025-05-19 22:25:35.891228 | orchestrator | 2635cc40c26d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2025-05-19 22:25:35.891239 | orchestrator | d203fd407646 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-05-19 22:25:35.891250 | orchestrator | f8f35efd598e registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-05-19 22:25:35.891261 | orchestrator | 00d4eca0d36a registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-05-19 22:25:35.891272 | orchestrator | a9276246c986 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-05-19 22:25:35.891283 | orchestrator | 3b9b5da8e5f9 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb
2025-05-19 22:25:35.891294 | orchestrator | bfc9912ae896 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-05-19 22:25:35.891313 | orchestrator | f9b7dc758a55 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-05-19 22:25:35.891325 | orchestrator | d0d461f477dd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0
2025-05-19 22:25:35.891335 | orchestrator | 1b03f6acc626 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-05-19 22:25:35.891346 | orchestrator | 1e63663a4174 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-05-19 22:25:35.891363 | orchestrator | cd41c622b60e registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-05-19 22:25:35.891375 | orchestrator | 6c8cd7be7418 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-05-19 22:25:35.891399 | orchestrator | 8b5bf393410c registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-05-19 22:25:35.891411 | orchestrator | 9921e3194bed registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db
2025-05-19 22:25:35.891422 | orchestrator | 374c13e40775 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0
2025-05-19 22:25:35.891433 | orchestrator | 5b0df4b99eb1 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-05-19 22:25:35.891444 | orchestrator | 445b6c2089f3 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-05-19 22:25:35.891455 | orchestrator | f2d7a867a904 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-05-19 22:25:35.891466 | orchestrator | a36e3a8441b5 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-05-19 22:25:35.891477 | orchestrator | 9abc7833c6eb registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-05-19 22:25:35.891489 | orchestrator | dc8a03b43077 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-05-19 22:25:35.891499 | orchestrator | 480ac5898258 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-05-19 22:25:35.891510 | orchestrator | 6a6d00f635d3 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-05-19 22:25:35.891521 | orchestrator | 4d573a7b3fe8 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-05-19 22:25:35.891532 | orchestrator | b50a04a93b8c registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-05-19 22:25:36.221143 | orchestrator |
2025-05-19 22:25:36.221263 | orchestrator | ## Images @ testbed-node-0
2025-05-19 22:25:36.221280 | orchestrator |
2025-05-19 22:25:36.221292 | orchestrator | + echo
2025-05-19 22:25:36.221304 | orchestrator | + echo '## Images @ testbed-node-0'
2025-05-19 22:25:36.221342 | orchestrator | + echo
2025-05-19 22:25:36.221354 | orchestrator | + osism container testbed-node-0 images
2025-05-19 22:25:38.577401 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-05-19 22:25:38.577529 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 337e4de32d05 19 hours ago 1.27GB
2025-05-19 22:25:38.577545 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e210496370d6 21 hours ago 325MB
2025-05-19 22:25:38.577558 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 98f8932715ed 21 hours ago 1.02GB
2025-05-19 22:25:38.577569 | orchestrator | registry.osism.tech/kolla/cron 2024.2 d1f2ebfdaafa 21 hours ago 325MB
2025-05-19 22:25:38.577580 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 fb398e112a94 21 hours ago 425MB
2025-05-19 22:25:38.577591 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ff61877c6f9a 21 hours ago 635MB
2025-05-19 22:25:38.577603 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 5a4e2ecd6cd8 21 hours ago 333MB
2025-05-19 22:25:38.577614 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 85cbe560a6a5 21 hours ago 753MB
2025-05-19 22:25:38.577625 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 764003e5a453 21 hours ago 336MB
2025-05-19 22:25:38.577636 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 aebda5188773 21 hours ago 382MB
2025-05-19 22:25:38.577647 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 3c81b3a03e26 21 hours ago 1.56GB
2025-05-19 22:25:38.577658 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 346c4c942746 21 hours ago 1.6GB
2025-05-19 22:25:38.577749 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 51c53010013a 21 hours ago 1.22GB
2025-05-19 22:25:38.577764 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 ac71644620ce 21 hours ago 331MB
2025-05-19 22:25:38.577775 | orchestrator | registry.osism.tech/kolla/redis 2024.2 5f35356f01de 21 hours ago 331MB
2025-05-19 22:25:38.577786 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 08fa394d0c53 21 hours ago 597MB
2025-05-19 22:25:38.577797 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 56d7d2734b69 21 hours ago 358MB
2025-05-19 22:25:38.577808 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 db7b45ced52e 21 hours ago 417MB
2025-05-19 22:25:38.577819 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 b47ed4892777 21 hours ago 351MB
2025-05-19 22:25:38.577830 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 34a9317295e9 21 hours ago 360MB
2025-05-19 22:25:38.577840 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d8f8ac9b12c6 21 hours ago 365MB
2025-05-19 22:25:38.577851 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 409dc2f5ba10 21 hours ago 368MB
2025-05-19 22:25:38.577862 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 6be9e6161bec 21 hours ago 368MB
2025-05-19 22:25:38.577873 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3aa83c27e689 21 hours ago 1.25GB
2025-05-19 22:25:38.577884 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 43932f4e274f 21 hours ago 1.14GB
2025-05-19 22:25:38.577895 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 58b8e4608c23 21 hours ago 1.11GB
2025-05-19 22:25:38.577906 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 41ccd10b0a05 21 hours ago 1.12GB
2025-05-19 22:25:38.577925 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0a14cad7002e 21 hours ago 1.31GB
2025-05-19 22:25:38.577957 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 01f6e86e50b7 21 hours ago 1.2GB
2025-05-19 22:25:38.577969 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6c6c9e788527 21 hours ago 1.05GB
2025-05-19 22:25:38.577980 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 e5aa89b61516 21 hours ago 1.16GB
2025-05-19 22:25:38.577991 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 ec9dca657403 21 hours ago 1.43GB
2025-05-19 22:25:38.578002 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 439bee5c20ef 21 hours ago 1.3GB
2025-05-19 22:25:38.578014 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 68578d55bfec 21 hours ago 1.3GB
2025-05-19 22:25:38.578146 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 b79121c50e35 21 hours ago 1.3GB
2025-05-19 22:25:38.578188 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 af49ea0edd22 21 hours ago 1.05GB
2025-05-19 22:25:38.578236 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 bbfe1a7ce4ee 21 hours ago 1.05GB
2025-05-19 22:25:38.578257 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 dc3b6ea24901 21 hours ago 1.06GB
2025-05-19 22:25:38.578277 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 71c1782aa49f 21 hours ago 1.06GB
2025-05-19 22:25:38.578296 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 50ba6b6676a2 21 hours ago 1.06GB
2025-05-19 22:25:38.578312 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 3e093a9b579c 21 hours ago 1.06GB
2025-05-19 22:25:38.578323 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 5352fd2b8cef 21 hours ago 1.06GB
2025-05-19 22:25:38.578334 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 69c5fb32c728 21 hours ago 1.06GB
2025-05-19 22:25:38.578344 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 1cc196c1f755 21 hours ago 1.41GB
2025-05-19 22:25:38.578355 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 f2ba11db404f 21 hours ago 1.41GB
2025-05-19 22:25:38.578366 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 da9d8a7c6e9c 21 hours ago 1.05GB
2025-05-19 22:25:38.578377 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 01f8d389f8cb 21 hours ago 1.05GB
2025-05-19 22:25:38.578387 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 291e9335d6ec 21 hours ago 1.05GB
2025-05-19 22:25:38.578398 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 66fe060a7ad0 21 hours ago 1.05GB
2025-05-19 22:25:38.578409 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b4072c0c818e 21 hours ago 1.13GB
2025-05-19 22:25:38.578419 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 074824aa0e40 21 hours ago 1.1GB
2025-05-19 22:25:38.578430 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 faae1b094f64 21 hours ago 1.1GB
2025-05-19 22:25:38.578441 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 29c8ff23103b 21 hours ago 1.1GB
2025-05-19 22:25:38.578452 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 f7c07501a9b2 21 hours ago 1.13GB
2025-05-19 22:25:38.578462 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 a0c2f314ca0b 21 hours ago 1.11GB
2025-05-19 22:25:38.578473 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 b10e6b6ed9bf 21 hours ago 1.12GB
2025-05-19 22:25:38.578484 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 bd68af59ca7c 21 hours ago 1.06GB
2025-05-19 22:25:38.578494 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 96b6bf2076dc 21 hours ago 1.07GB
2025-05-19 22:25:38.578517 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2bf9cc1505b7 21 hours ago 1.07GB
2025-05-19 22:25:38.578535 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 d8b4138c1ae7 21 hours ago 953MB
2025-05-19 22:25:38.578547 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 2eaccb490396 21 hours ago 953MB
2025-05-19 22:25:38.578558 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 57e22b5fe5f4 21 hours ago 954MB
2025-05-19 22:25:38.578569 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a250a9fddb69 21 hours ago 954MB
2025-05-19 22:25:38.925143 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-05-19 22:25:38.925376 | orchestrator | ++ semver latest 5.0.0
2025-05-19 22:25:38.983486 | orchestrator |
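The `+`/`++` lines interleaved with the listings are bash `set -x` traces of the testbed's per-node reporting loop. As a rough sketch of its control flow — `osism` and `semver` are the testbed's own helpers, stubbed out here, and the exact guard logic is inferred from the trace (`semver latest 5.0.0` evaluates to `-1` for the non-comparable tag `latest`):

```shell
# Sketch reconstructed from the set -x trace; not the verbatim testbed script.
manager_version=latest

report_node() {
    node=$1
    echo
    echo "## Containers @ ${node}"
    echo
    # osism container "${node}" ps      # stubbed: needs a live testbed manager
    echo
    echo "## Images @ ${node}"
    echo
    # osism container "${node}" images  # stubbed: needs a live testbed manager
}

for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
    result=-1  # stand-in for: result=$(semver "$manager_version" 5.0.0)
    # The trace shows both guards evaluated; a numeric version below 5.0.0
    # would presumably skip the per-node listing here.
    if [ "$result" -eq -1 ] && [ "$manager_version" != latest ]; then
        continue
    fi
    report_node "$node"
done
```

With `manager_version=latest` the guard never skips, which matches the log: every node gets a `## Containers @ …` and `## Images @ …` section.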
2025-05-19 22:25:38.983586 | orchestrator | ## Containers @ testbed-node-1
2025-05-19 22:25:38.983596 | orchestrator |
2025-05-19 22:25:38.983603 | orchestrator | + [[ -1 -eq -1 ]]
2025-05-19 22:25:38.983610 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-19 22:25:38.983617 | orchestrator | + echo
2025-05-19 22:25:38.983623 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-05-19 22:25:38.983630 | orchestrator | + echo
2025-05-19 22:25:38.983640 | orchestrator | + osism container testbed-node-1 ps
2025-05-19 22:25:41.373880 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-05-19 22:25:41.373977 | orchestrator | 9654512d9a62 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-05-19 22:25:41.373996 | orchestrator | c9183a589bce registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-05-19 22:25:41.374009 | orchestrator | 1f46b16d34eb registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager
2025-05-19 22:25:41.374065 | orchestrator | a2a4a6a99822 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-05-19 22:25:41.374077 | orchestrator | 3cc272b86558 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-05-19 22:25:41.374093 | orchestrator | cfdf60850a3f registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-05-19 22:25:41.374105 | orchestrator | 22718bdadda8 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-05-19 22:25:41.374117 | orchestrator | 5ec534ae24c2 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 7 minutes (healthy) magnum_api
2025-05-19 22:25:41.374128 | orchestrator | 083b7e8b3163 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-05-19 22:25:41.374139 | orchestrator | acd19223c5a7 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-05-19 22:25:41.374151 | orchestrator | f8df4942b8f1 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-05-19 22:25:41.374163 | orchestrator | 6f3830759b2b registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer
2025-05-19 22:25:41.374175 | orchestrator | 734cbbfa020e registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-05-19 22:25:41.374205 | orchestrator | 41bb7027f1b5 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-05-19 22:25:41.374218 | orchestrator | 100b84efd33a registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-05-19 22:25:41.374229 | orchestrator | 936bbd65eb15 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy
2025-05-19 22:25:41.374254 | orchestrator | b438f035291a registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-05-19 22:25:41.374265 | orchestrator | 6cdb76c01240 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-05-19 22:25:41.374276 | orchestrator | 8981d66b28de registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-05-19 22:25:41.374289 | orchestrator | 73e05734d8d5 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-05-19 22:25:41.374300 | orchestrator | bc3d28607261 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api
2025-05-19 22:25:41.374324 | orchestrator | 34a5633460ee registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-05-19 22:25:41.374331 | orchestrator | a44f9b990d31 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-05-19 22:25:41.374338 | orchestrator | 03d9a2863db5 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-05-19 22:25:41.374345 | orchestrator | c116c5699b99 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-05-19 22:25:41.374351 | orchestrator | 7fedf5465474 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-05-19 22:25:41.374358 | orchestrator | b61e6615e41b registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-05-19 22:25:41.374365 | orchestrator | 6e8d55d4e710 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-05-19 22:25:41.374371 | orchestrator | 919222796a1a registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-05-19 22:25:41.374378 | orchestrator | 7cf7e2cbc49a registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-05-19 22:25:41.374384 | orchestrator | d73a5c6fbc26 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-05-19 22:25:41.374391 | orchestrator | f07d37c5458f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2025-05-19 22:25:41.374403 | orchestrator | 8138eac253b1 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-05-19 22:25:41.374410 | orchestrator | f8e16a082dab registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-05-19 22:25:41.374416 | orchestrator | a55bf91c2aca registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-05-19 22:25:41.374423 | orchestrator | ce395902983e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-05-19 22:25:41.374431 | orchestrator | 19f2ceff8e35 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-05-19 22:25:41.374439 | orchestrator | 01d9b94535a7 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-05-19 22:25:41.374446 | orchestrator | 4b7955935ea2 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-05-19 22:25:41.374454 | orchestrator | 9a55a8013167 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2025-05-19 22:25:41.374465 | orchestrator | 660a68fcae9c registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 22 minutes keepalived
2025-05-19 22:25:41.374475 | orchestrator | 41d80bcde0c4 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-05-19 22:25:41.374487 | orchestrator | 5f43ccacc38e registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-05-19 22:25:41.374499 | orchestrator | 57af987fdb9a registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd
2025-05-19 22:25:41.374518 | orchestrator | 4c892dc0d518 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db
2025-05-19 22:25:41.374530 | orchestrator | a99b7ac8c5d9 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db
2025-05-19 22:25:41.374542 | orchestrator | 977c888dbe13 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-05-19 22:25:41.374554 | orchestrator | 7d8437dc9cd6 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-05-19 22:25:41.374566 | orchestrator | e340ae1a2d46 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1
2025-05-19 22:25:41.374578 | orchestrator | 6d564a9ab608 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-05-19 22:25:41.374590 | orchestrator | cdd0c15e6f48 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-05-19 22:25:41.374602 | orchestrator | acbcde29f804 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-05-19 22:25:41.374622 | orchestrator | ec1931d48dd4 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-05-19 22:25:41.374635 | orchestrator | 00b0ed4981c4 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-05-19 22:25:41.374647 | orchestrator | 80b83de4c45e registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-05-19 22:25:41.374658 | orchestrator | c35164f4fb4e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-05-19 22:25:41.374688 | orchestrator | 8b59814237a5 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-05-19 22:25:41.594316 | orchestrator |
2025-05-19 22:25:41.594409 | orchestrator | ## Images @ testbed-node-1
2025-05-19 22:25:41.594426 | orchestrator |
2025-05-19 22:25:41.594439 | orchestrator | + echo
2025-05-19 22:25:41.594451 | orchestrator | + echo '## Images @ testbed-node-1'
2025-05-19 22:25:41.594464 | orchestrator | + echo
2025-05-19 22:25:41.594476 | orchestrator | + osism container testbed-node-1 images
2025-05-19 22:25:43.539331 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-05-19 22:25:43.539387 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 337e4de32d05 19 hours ago 1.27GB
2025-05-19 22:25:43.539392 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e210496370d6 21 hours ago 325MB
2025-05-19 22:25:43.539397 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 98f8932715ed 21 hours ago 1.02GB
2025-05-19 22:25:43.539401 | orchestrator | registry.osism.tech/kolla/cron 2024.2 d1f2ebfdaafa 21 hours ago 325MB
2025-05-19 22:25:43.539404 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 fb398e112a94 21 hours ago 425MB
2025-05-19 22:25:43.539408 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ff61877c6f9a 21 hours ago 635MB
2025-05-19 22:25:43.539412 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 5a4e2ecd6cd8 21 hours ago 333MB
2025-05-19 22:25:43.539416 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 85cbe560a6a5 21 hours ago 753MB
2025-05-19 22:25:43.539420 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 764003e5a453 21 hours ago 336MB
2025-05-19 22:25:43.539424 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 aebda5188773 21 hours ago 382MB
2025-05-19 22:25:43.539427 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 3c81b3a03e26 21 hours ago 1.56GB
2025-05-19 22:25:43.539431 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 346c4c942746 21 hours ago 1.6GB
2025-05-19 22:25:43.539435 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 51c53010013a 21 hours ago 1.22GB
2025-05-19 22:25:43.539439 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 ac71644620ce 21 hours ago 331MB
2025-05-19 22:25:43.539443 | orchestrator | registry.osism.tech/kolla/redis 2024.2 5f35356f01de 21 hours ago 331MB
2025-05-19 22:25:43.539446 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 08fa394d0c53 21 hours ago 597MB
2025-05-19 22:25:43.539459 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 56d7d2734b69 21 hours ago 358MB
2025-05-19 22:25:43.539464 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 db7b45ced52e 21 hours ago 417MB
2025-05-19 22:25:43.539467 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 b47ed4892777 21 hours ago 351MB
2025-05-19 22:25:43.539480 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 34a9317295e9 21 hours ago 360MB
2025-05-19 22:25:43.539484 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d8f8ac9b12c6 21 hours ago 365MB
2025-05-19 22:25:43.539487 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 409dc2f5ba10 21 hours ago 368MB
2025-05-19 22:25:43.539491 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 6be9e6161bec 21 hours ago 368MB
2025-05-19 22:25:43.539495 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3aa83c27e689 21 hours ago 1.25GB
2025-05-19 22:25:43.539499 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 43932f4e274f 21 hours ago 1.14GB
2025-05-19 22:25:43.539502 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 58b8e4608c23 21 hours ago 1.11GB
2025-05-19 22:25:43.539506 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 41ccd10b0a05 21 hours ago 1.12GB
2025-05-19 22:25:43.539510 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0a14cad7002e 21 hours ago 1.31GB
2025-05-19 22:25:43.539514 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 01f6e86e50b7 21 hours ago 1.2GB
2025-05-19 22:25:43.539518 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6c6c9e788527 21 hours ago 1.05GB
2025-05-19 22:25:43.539522 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 e5aa89b61516 21 hours ago 1.16GB
2025-05-19 22:25:43.539526 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 ec9dca657403 21 hours ago 1.43GB
2025-05-19 22:25:43.539529 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 439bee5c20ef 21 hours ago 1.3GB
2025-05-19 22:25:43.539533 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 68578d55bfec 21 hours ago 1.3GB
2025-05-19 22:25:43.539537 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 b79121c50e35 21 hours ago 1.3GB
2025-05-19 22:25:43.539540 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 dc3b6ea24901 21 hours ago 1.06GB
2025-05-19 22:25:43.539552 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 71c1782aa49f 21 hours ago 1.06GB
2025-05-19 22:25:43.539556 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 50ba6b6676a2 21 hours ago 1.06GB
2025-05-19 22:25:43.539560 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 3e093a9b579c 21 hours ago 1.06GB
2025-05-19 22:25:43.539564 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 5352fd2b8cef 21 hours ago 1.06GB
2025-05-19 22:25:43.539567 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 69c5fb32c728 21 hours ago 1.06GB
2025-05-19 22:25:43.539571 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 1cc196c1f755 21 hours ago 1.41GB
2025-05-19 22:25:43.539575 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 f2ba11db404f 21 hours ago 1.41GB
2025-05-19 22:25:43.539579 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b4072c0c818e 21 hours ago 1.13GB
2025-05-19 22:25:43.539582 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 074824aa0e40 21 hours ago 1.1GB
2025-05-19 22:25:43.539586 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 faae1b094f64 21 hours ago 1.1GB
2025-05-19 22:25:43.539590 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 29c8ff23103b 21 hours ago 1.1GB
2025-05-19 22:25:43.539594 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 f7c07501a9b2 21 hours ago 1.13GB
2025-05-19 22:25:43.539600 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 bd68af59ca7c 21 hours ago 1.06GB
2025-05-19 22:25:43.539604 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 96b6bf2076dc 21 hours ago 1.07GB
2025-05-19 22:25:43.539607 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2bf9cc1505b7 21 hours ago 1.07GB
2025-05-19 22:25:43.539611 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 d8b4138c1ae7 21 hours ago 953MB
2025-05-19 22:25:43.539615 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 2eaccb490396 21 hours ago
953MB 2025-05-19 22:25:43.539619 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a250a9fddb69 21 hours ago 954MB 2025-05-19 22:25:43.539622 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 57e22b5fe5f4 21 hours ago 954MB 2025-05-19 22:25:43.782066 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-05-19 22:25:43.782870 | orchestrator | ++ semver latest 5.0.0 2025-05-19 22:25:43.831878 | orchestrator | 2025-05-19 22:25:43.831980 | orchestrator | ## Containers @ testbed-node-2 2025-05-19 22:25:43.832010 | orchestrator | 2025-05-19 22:25:43.832028 | orchestrator | + [[ -1 -eq -1 ]] 2025-05-19 22:25:43.832046 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-19 22:25:43.832064 | orchestrator | + echo 2025-05-19 22:25:43.832083 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-05-19 22:25:43.832100 | orchestrator | + echo 2025-05-19 22:25:43.832117 | orchestrator | + osism container testbed-node-2 ps 2025-05-19 22:25:46.165795 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-05-19 22:25:46.165909 | orchestrator | 8b7bcebb46a8 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-05-19 22:25:46.165924 | orchestrator | 398af1a41066 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-05-19 22:25:46.165933 | orchestrator | d25f8c63dff7 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_health_manager 2025-05-19 22:25:46.165943 | orchestrator | 37cb74683863 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent 2025-05-19 22:25:46.165971 | orchestrator | d37c30af795d registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 
minutes (healthy) octavia_api 2025-05-19 22:25:46.165980 | orchestrator | b10a22857dad registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-05-19 22:25:46.165989 | orchestrator | b7410482579c registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-05-19 22:25:46.165998 | orchestrator | a6e9355ac8e3 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-05-19 22:25:46.166007 | orchestrator | b671b112a21e registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-05-19 22:25:46.166071 | orchestrator | 19f65dc33fa7 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-05-19 22:25:46.166081 | orchestrator | 23d76493e25e registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-05-19 22:25:46.166090 | orchestrator | c882e4d58713 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-05-19 22:25:46.166136 | orchestrator | c31fbec77b39 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-05-19 22:25:46.166157 | orchestrator | c406cf7c3b8a registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-05-19 22:25:46.166167 | orchestrator | 3a9d9fbe92d5 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-05-19 22:25:46.166176 | orchestrator | 39b4c1a715b9 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) 
nova_novncproxy 2025-05-19 22:25:46.166185 | orchestrator | f5dddeae3d4d registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-05-19 22:25:46.166194 | orchestrator | 50a64ad27487 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-05-19 22:25:46.166202 | orchestrator | 47e5e7bbf652 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-05-19 22:25:46.166211 | orchestrator | 6aff9b9ac4e9 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-05-19 22:25:46.166223 | orchestrator | 0a0091179f8d registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-05-19 22:25:46.166258 | orchestrator | a28a03b8f8bb registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-05-19 22:25:46.166272 | orchestrator | 037882dd574c registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-05-19 22:25:46.166286 | orchestrator | 79e2d33b038f registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-05-19 22:25:46.166300 | orchestrator | 3614096e36c8 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-05-19 22:25:46.166315 | orchestrator | 4c976e44ca26 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-05-19 22:25:46.166331 | orchestrator | 4aca303bd282 
registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-05-19 22:25:46.166346 | orchestrator | 3fbba3c6815c registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-05-19 22:25:46.166361 | orchestrator | 42d0ce5830eb registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) cinder_api 2025-05-19 22:25:46.166376 | orchestrator | 21acbf8d3f7b registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-05-19 22:25:46.166391 | orchestrator | 6fc5f8e00f51 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-05-19 22:25:46.166416 | orchestrator | fa7168f43836 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-05-19 22:25:46.166431 | orchestrator | f9ddd7c26bbb registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-05-19 22:25:46.166446 | orchestrator | 9951de97d677 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-05-19 22:25:46.166463 | orchestrator | e704dace5f06 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-05-19 22:25:46.166478 | orchestrator | bd773ef61ca6 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-05-19 22:25:46.166493 | orchestrator | b0b061faf43b registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-05-19 22:25:46.166508 | 
orchestrator | 7c5fd907bcad registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-05-19 22:25:46.166530 | orchestrator | 060e391c5e74 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-05-19 22:25:46.166546 | orchestrator | 55f0c7270897 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2025-05-19 22:25:46.166561 | orchestrator | e0cd9a0692df registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-05-19 22:25:46.166576 | orchestrator | 022a36a607ca registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-05-19 22:25:46.166591 | orchestrator | f28fde25ef33 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-05-19 22:25:46.166607 | orchestrator | eeeaaa97230a registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd 2025-05-19 22:25:46.166633 | orchestrator | 08986ed702e2 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db 2025-05-19 22:25:46.166649 | orchestrator | 4711bbe55504 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-05-19 22:25:46.166691 | orchestrator | 1f94096f199a registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-05-19 22:25:46.166706 | orchestrator | 8848d15c65e3 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-05-19 22:25:46.166720 | orchestrator | d3ddfae0a39c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes 
ceph-mon-testbed-node-2 2025-05-19 22:25:46.166735 | orchestrator | 22f27ceca4da registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-05-19 22:25:46.166751 | orchestrator | e7a6ec39ee7c registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-05-19 22:25:46.166777 | orchestrator | 7f0ef754efac registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-05-19 22:25:46.166794 | orchestrator | cc0a15010e10 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-05-19 22:25:46.166809 | orchestrator | f68ecb53cda2 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-05-19 22:25:46.166824 | orchestrator | 7ce7b1d75b65 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-05-19 22:25:46.166840 | orchestrator | 60a2314d4837 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-05-19 22:25:46.166855 | orchestrator | 9868edc0017a registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-05-19 22:25:46.539914 | orchestrator | 2025-05-19 22:25:46.540006 | orchestrator | ## Images @ testbed-node-2 2025-05-19 22:25:46.540015 | orchestrator | 2025-05-19 22:25:46.540021 | orchestrator | + echo 2025-05-19 22:25:46.540028 | orchestrator | + echo '## Images @ testbed-node-2' 2025-05-19 22:25:46.540036 | orchestrator | + echo 2025-05-19 22:25:46.540043 | orchestrator | + osism container testbed-node-2 images 2025-05-19 22:25:48.768433 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-05-19 22:25:48.769191 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 
337e4de32d05 19 hours ago 1.27GB 2025-05-19 22:25:48.769218 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 e210496370d6 21 hours ago 325MB 2025-05-19 22:25:48.769227 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 98f8932715ed 21 hours ago 1.02GB 2025-05-19 22:25:48.769234 | orchestrator | registry.osism.tech/kolla/cron 2024.2 d1f2ebfdaafa 21 hours ago 325MB 2025-05-19 22:25:48.769241 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 fb398e112a94 21 hours ago 425MB 2025-05-19 22:25:48.769247 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 ff61877c6f9a 21 hours ago 635MB 2025-05-19 22:25:48.769253 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 5a4e2ecd6cd8 21 hours ago 333MB 2025-05-19 22:25:48.769260 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 85cbe560a6a5 21 hours ago 753MB 2025-05-19 22:25:48.769266 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 764003e5a453 21 hours ago 336MB 2025-05-19 22:25:48.769272 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 aebda5188773 21 hours ago 382MB 2025-05-19 22:25:48.769278 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 3c81b3a03e26 21 hours ago 1.56GB 2025-05-19 22:25:48.769284 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 346c4c942746 21 hours ago 1.6GB 2025-05-19 22:25:48.769290 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 51c53010013a 21 hours ago 1.22GB 2025-05-19 22:25:48.769297 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 ac71644620ce 21 hours ago 331MB 2025-05-19 22:25:48.769303 | orchestrator | registry.osism.tech/kolla/redis 2024.2 5f35356f01de 21 hours ago 331MB 2025-05-19 22:25:48.769309 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 08fa394d0c53 21 hours ago 597MB 2025-05-19 22:25:48.769315 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 56d7d2734b69 21 hours ago 358MB 2025-05-19 22:25:48.769341 | 
orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 db7b45ced52e 21 hours ago 417MB 2025-05-19 22:25:48.769347 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 b47ed4892777 21 hours ago 351MB 2025-05-19 22:25:48.769353 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 34a9317295e9 21 hours ago 360MB 2025-05-19 22:25:48.769359 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 d8f8ac9b12c6 21 hours ago 365MB 2025-05-19 22:25:48.769365 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 409dc2f5ba10 21 hours ago 368MB 2025-05-19 22:25:48.769371 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 6be9e6161bec 21 hours ago 368MB 2025-05-19 22:25:48.769378 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 3aa83c27e689 21 hours ago 1.25GB 2025-05-19 22:25:48.769384 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 43932f4e274f 21 hours ago 1.14GB 2025-05-19 22:25:48.769390 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 58b8e4608c23 21 hours ago 1.11GB 2025-05-19 22:25:48.769396 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 41ccd10b0a05 21 hours ago 1.12GB 2025-05-19 22:25:48.769402 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 0a14cad7002e 21 hours ago 1.31GB 2025-05-19 22:25:48.769408 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 01f6e86e50b7 21 hours ago 1.2GB 2025-05-19 22:25:48.769413 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6c6c9e788527 21 hours ago 1.05GB 2025-05-19 22:25:48.769419 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 e5aa89b61516 21 hours ago 1.16GB 2025-05-19 22:25:48.769426 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 ec9dca657403 21 hours ago 1.43GB 2025-05-19 22:25:48.769447 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 439bee5c20ef 21 hours 
ago 1.3GB 2025-05-19 22:25:48.769455 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 68578d55bfec 21 hours ago 1.3GB 2025-05-19 22:25:48.769462 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 b79121c50e35 21 hours ago 1.3GB 2025-05-19 22:25:48.769468 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 dc3b6ea24901 21 hours ago 1.06GB 2025-05-19 22:25:48.769497 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 71c1782aa49f 21 hours ago 1.06GB 2025-05-19 22:25:48.769504 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 50ba6b6676a2 21 hours ago 1.06GB 2025-05-19 22:25:48.769510 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 3e093a9b579c 21 hours ago 1.06GB 2025-05-19 22:25:48.769516 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 5352fd2b8cef 21 hours ago 1.06GB 2025-05-19 22:25:48.769523 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 69c5fb32c728 21 hours ago 1.06GB 2025-05-19 22:25:48.769529 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 1cc196c1f755 21 hours ago 1.41GB 2025-05-19 22:25:48.769536 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 f2ba11db404f 21 hours ago 1.41GB 2025-05-19 22:25:48.769542 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 b4072c0c818e 21 hours ago 1.13GB 2025-05-19 22:25:48.769548 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 074824aa0e40 21 hours ago 1.1GB 2025-05-19 22:25:48.769554 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 faae1b094f64 21 hours ago 1.1GB 2025-05-19 22:25:48.769560 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 29c8ff23103b 21 hours ago 1.1GB 2025-05-19 22:25:48.769573 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 f7c07501a9b2 21 hours ago 1.13GB 2025-05-19 22:25:48.769580 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 
bd68af59ca7c 21 hours ago 1.06GB 2025-05-19 22:25:48.769586 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 96b6bf2076dc 21 hours ago 1.07GB 2025-05-19 22:25:48.769593 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 2bf9cc1505b7 21 hours ago 1.07GB 2025-05-19 22:25:48.769599 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 d8b4138c1ae7 21 hours ago 953MB 2025-05-19 22:25:48.769605 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 2eaccb490396 21 hours ago 953MB 2025-05-19 22:25:48.769611 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 57e22b5fe5f4 21 hours ago 954MB 2025-05-19 22:25:48.769618 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 a250a9fddb69 21 hours ago 954MB 2025-05-19 22:25:49.130361 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-05-19 22:25:49.137161 | orchestrator | + set -e 2025-05-19 22:25:49.137228 | orchestrator | + source /opt/manager-vars.sh 2025-05-19 22:25:49.138448 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-19 22:25:49.138482 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-19 22:25:49.138493 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-19 22:25:49.138504 | orchestrator | ++ CEPH_VERSION=reef 2025-05-19 22:25:49.138516 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-19 22:25:49.138533 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-19 22:25:49.138544 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 22:25:49.138556 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 22:25:49.138567 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-19 22:25:49.138578 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-19 22:25:49.138590 | orchestrator | ++ export ARA=false 2025-05-19 22:25:49.138601 | orchestrator | ++ ARA=false 2025-05-19 22:25:49.138612 | orchestrator | ++ export TEMPEST=false 2025-05-19 22:25:49.138623 | orchestrator | ++ TEMPEST=false 
2025-05-19 22:25:49.138634 | orchestrator | ++ export IS_ZUUL=true 2025-05-19 22:25:49.138644 | orchestrator | ++ IS_ZUUL=true 2025-05-19 22:25:49.138655 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197 2025-05-19 22:25:49.138713 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197 2025-05-19 22:25:49.138726 | orchestrator | ++ export EXTERNAL_API=false 2025-05-19 22:25:49.138737 | orchestrator | ++ EXTERNAL_API=false 2025-05-19 22:25:49.138748 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-19 22:25:49.138758 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-19 22:25:49.138769 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-19 22:25:49.138780 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-19 22:25:49.138791 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-19 22:25:49.138802 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-19 22:25:49.138813 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-19 22:25:49.138824 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-05-19 22:25:49.143981 | orchestrator | + set -e 2025-05-19 22:25:49.144544 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 22:25:49.144572 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 22:25:49.144584 | orchestrator | ++ INTERACTIVE=false 2025-05-19 22:25:49.144595 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 22:25:49.144606 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 22:25:49.144618 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-05-19 22:25:49.145453 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-05-19 22:25:49.152051 | orchestrator | 2025-05-19 22:25:49.152115 | orchestrator | # Ceph status 2025-05-19 22:25:49.152134 | orchestrator | 2025-05-19 22:25:49.152155 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 
22:25:49.152176 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 22:25:49.152196 | orchestrator | + echo 2025-05-19 22:25:49.152216 | orchestrator | + echo '# Ceph status' 2025-05-19 22:25:49.152236 | orchestrator | + echo 2025-05-19 22:25:49.152255 | orchestrator | + ceph -s 2025-05-19 22:25:49.760873 | orchestrator | cluster: 2025-05-19 22:25:49.760978 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-05-19 22:25:49.760992 | orchestrator | health: HEALTH_OK 2025-05-19 22:25:49.761023 | orchestrator | 2025-05-19 22:25:49.761030 | orchestrator | services: 2025-05-19 22:25:49.761037 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m) 2025-05-19 22:25:49.761045 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0 2025-05-19 22:25:49.761053 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-05-19 22:25:49.761060 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m) 2025-05-19 22:25:49.761067 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-05-19 22:25:49.761073 | orchestrator | 2025-05-19 22:25:49.761079 | orchestrator | data: 2025-05-19 22:25:49.761085 | orchestrator | volumes: 1/1 healthy 2025-05-19 22:25:49.761091 | orchestrator | pools: 14 pools, 401 pgs 2025-05-19 22:25:49.761098 | orchestrator | objects: 524 objects, 2.2 GiB 2025-05-19 22:25:49.761105 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-05-19 22:25:49.761111 | orchestrator | pgs: 401 active+clean 2025-05-19 22:25:49.761117 | orchestrator | 2025-05-19 22:25:49.811295 | orchestrator | 2025-05-19 22:25:49.811369 | orchestrator | # Ceph versions 2025-05-19 22:25:49.811374 | orchestrator | 2025-05-19 22:25:49.811379 | orchestrator | + echo 2025-05-19 22:25:49.811383 | orchestrator | + echo '# Ceph versions' 2025-05-19 22:25:49.811389 | orchestrator | + echo 2025-05-19 22:25:49.811393 | orchestrator | + ceph versions 2025-05-19 22:25:50.425328 | 
orchestrator | { 2025-05-19 22:25:50.425418 | orchestrator | "mon": { 2025-05-19 22:25:50.425428 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-19 22:25:50.425436 | orchestrator | }, 2025-05-19 22:25:50.425443 | orchestrator | "mgr": { 2025-05-19 22:25:50.425449 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-19 22:25:50.425455 | orchestrator | }, 2025-05-19 22:25:50.425461 | orchestrator | "osd": { 2025-05-19 22:25:50.425467 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-05-19 22:25:50.425473 | orchestrator | }, 2025-05-19 22:25:50.425479 | orchestrator | "mds": { 2025-05-19 22:25:50.425485 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-19 22:25:50.425491 | orchestrator | }, 2025-05-19 22:25:50.425497 | orchestrator | "rgw": { 2025-05-19 22:25:50.425503 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-05-19 22:25:50.425509 | orchestrator | }, 2025-05-19 22:25:50.425514 | orchestrator | "overall": { 2025-05-19 22:25:50.425521 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-05-19 22:25:50.425527 | orchestrator | } 2025-05-19 22:25:50.425533 | orchestrator | } 2025-05-19 22:25:50.479617 | orchestrator | 2025-05-19 22:25:50.479796 | orchestrator | # Ceph OSD tree 2025-05-19 22:25:50.479817 | orchestrator | 2025-05-19 22:25:50.479828 | orchestrator | + echo 2025-05-19 22:25:50.479839 | orchestrator | + echo '# Ceph OSD tree' 2025-05-19 22:25:50.479859 | orchestrator | + echo 2025-05-19 22:25:50.479875 | orchestrator | + ceph osd df tree 2025-05-19 22:25:51.052233 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-05-19 22:25:51.052357 | orchestrator | -1 0.11691 - 120 
GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-05-19 22:25:51.052371 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-05-19 22:25:51.052381 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.46 0.92 186 up osd.0 2025-05-19 22:25:51.052390 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.37 1.08 202 up osd.4 2025-05-19 22:25:51.052399 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-05-19 22:25:51.052408 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 6.08 1.03 192 up osd.2 2025-05-19 22:25:51.052433 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.75 0.97 200 up osd.3 2025-05-19 22:25:51.052443 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-05-19 22:25:51.052487 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.57 1.11 213 up osd.1 2025-05-19 22:25:51.052498 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1003 MiB 1 KiB 74 MiB 19 GiB 5.26 0.89 177 up osd.5 2025-05-19 22:25:51.052507 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-05-19 22:25:51.052518 | orchestrator | MIN/MAX VAR: 0.89/1.11 STDDEV: 0.47 2025-05-19 22:25:51.102615 | orchestrator | 2025-05-19 22:25:51.102786 | orchestrator | # Ceph monitor status 2025-05-19 22:25:51.102813 | orchestrator | 2025-05-19 22:25:51.102872 | orchestrator | + echo 2025-05-19 22:25:51.102885 | orchestrator | + echo '# Ceph monitor status' 2025-05-19 22:25:51.102896 | orchestrator | + echo 2025-05-19 22:25:51.102908 | orchestrator | + ceph mon stat 2025-05-19 22:25:51.739248 | orchestrator | e1: 3 mons at 
{testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-05-19 22:25:51.799329 | orchestrator |
2025-05-19 22:25:51.799444 | orchestrator | # Ceph quorum status
2025-05-19 22:25:51.799466 | orchestrator |
2025-05-19 22:25:51.799479 | orchestrator | + echo
2025-05-19 22:25:51.799492 | orchestrator | + echo '# Ceph quorum status'
2025-05-19 22:25:51.799505 | orchestrator | + echo
2025-05-19 22:25:51.800330 | orchestrator | + ceph quorum_status
2025-05-19 22:25:51.800366 | orchestrator | + jq
2025-05-19 22:25:52.541977 | orchestrator | {
2025-05-19 22:25:52.542213 | orchestrator | "election_epoch": 8,
2025-05-19 22:25:52.542247 | orchestrator | "quorum": [
2025-05-19 22:25:52.542274 | orchestrator | 0,
2025-05-19 22:25:52.542291 | orchestrator | 1,
2025-05-19 22:25:52.542308 | orchestrator | 2
2025-05-19 22:25:52.542326 | orchestrator | ],
2025-05-19 22:25:52.542343 | orchestrator | "quorum_names": [
2025-05-19 22:25:52.542360 | orchestrator | "testbed-node-0",
2025-05-19 22:25:52.542377 | orchestrator | "testbed-node-1",
2025-05-19 22:25:52.542395 | orchestrator | "testbed-node-2"
2025-05-19 22:25:52.542412 | orchestrator | ],
2025-05-19 22:25:52.542430 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-05-19 22:25:52.542449 | orchestrator | "quorum_age": 1651,
2025-05-19 22:25:52.542465 | orchestrator | "features": {
2025-05-19 22:25:52.542483 | orchestrator | "quorum_con": "4540138322906710015",
2025-05-19 22:25:52.542501 | orchestrator | "quorum_mon": [
2025-05-19 22:25:52.542519 | orchestrator | "kraken",
2025-05-19 22:25:52.542538 | orchestrator | "luminous",
2025-05-19 22:25:52.542557 | orchestrator | "mimic",
2025-05-19 22:25:52.542576 | orchestrator | "osdmap-prune",
2025-05-19 22:25:52.542596 | orchestrator | "nautilus",
2025-05-19 22:25:52.542614 | orchestrator | "octopus",
2025-05-19 22:25:52.542633 | orchestrator | "pacific",
2025-05-19 22:25:52.542707 | orchestrator | "elector-pinging",
2025-05-19 22:25:52.542728 | orchestrator | "quincy",
2025-05-19 22:25:52.542741 | orchestrator | "reef"
2025-05-19 22:25:52.542754 | orchestrator | ]
2025-05-19 22:25:52.542766 | orchestrator | },
2025-05-19 22:25:52.542778 | orchestrator | "monmap": {
2025-05-19 22:25:52.542791 | orchestrator | "epoch": 1,
2025-05-19 22:25:52.542803 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-05-19 22:25:52.542817 | orchestrator | "modified": "2025-05-19T21:58:01.800696Z",
2025-05-19 22:25:52.542830 | orchestrator | "created": "2025-05-19T21:58:01.800696Z",
2025-05-19 22:25:52.542843 | orchestrator | "min_mon_release": 18,
2025-05-19 22:25:52.542854 | orchestrator | "min_mon_release_name": "reef",
2025-05-19 22:25:52.542865 | orchestrator | "election_strategy": 1,
2025-05-19 22:25:52.542876 | orchestrator | "disallowed_leaders: ": "",
2025-05-19 22:25:52.542887 | orchestrator | "stretch_mode": false,
2025-05-19 22:25:52.542898 | orchestrator | "tiebreaker_mon": "",
2025-05-19 22:25:52.542909 | orchestrator | "removed_ranks: ": "",
2025-05-19 22:25:52.542925 | orchestrator | "features": {
2025-05-19 22:25:52.542943 | orchestrator | "persistent": [
2025-05-19 22:25:52.542961 | orchestrator | "kraken",
2025-05-19 22:25:52.542979 | orchestrator | "luminous",
2025-05-19 22:25:52.542999 | orchestrator | "mimic",
2025-05-19 22:25:52.543017 | orchestrator | "osdmap-prune",
2025-05-19 22:25:52.543036 | orchestrator | "nautilus",
2025-05-19 22:25:52.543047 | orchestrator | "octopus",
2025-05-19 22:25:52.543088 | orchestrator | "pacific",
2025-05-19 22:25:52.543099 | orchestrator | "elector-pinging",
2025-05-19 22:25:52.543110 | orchestrator | "quincy",
2025-05-19 22:25:52.543120 | orchestrator | "reef"
2025-05-19 22:25:52.543131 | orchestrator | ],
2025-05-19 22:25:52.543161 | orchestrator | "optional": []
2025-05-19 22:25:52.543180 | orchestrator | },
2025-05-19 22:25:52.543214 | orchestrator | "mons": [
2025-05-19 22:25:52.543232 | orchestrator | {
2025-05-19 22:25:52.543251 | orchestrator | "rank": 0,
2025-05-19 22:25:52.543269 | orchestrator | "name": "testbed-node-0",
2025-05-19 22:25:52.543287 | orchestrator | "public_addrs": {
2025-05-19 22:25:52.543306 | orchestrator | "addrvec": [
2025-05-19 22:25:52.543324 | orchestrator | {
2025-05-19 22:25:52.543352 | orchestrator | "type": "v2",
2025-05-19 22:25:52.543372 | orchestrator | "addr": "192.168.16.10:3300",
2025-05-19 22:25:52.543391 | orchestrator | "nonce": 0
2025-05-19 22:25:52.543410 | orchestrator | },
2025-05-19 22:25:52.543428 | orchestrator | {
2025-05-19 22:25:52.543439 | orchestrator | "type": "v1",
2025-05-19 22:25:52.543450 | orchestrator | "addr": "192.168.16.10:6789",
2025-05-19 22:25:52.543461 | orchestrator | "nonce": 0
2025-05-19 22:25:52.543471 | orchestrator | }
2025-05-19 22:25:52.543482 | orchestrator | ]
2025-05-19 22:25:52.543493 | orchestrator | },
2025-05-19 22:25:52.543504 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-05-19 22:25:52.543515 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-05-19 22:25:52.543526 | orchestrator | "priority": 0,
2025-05-19 22:25:52.543536 | orchestrator | "weight": 0,
2025-05-19 22:25:52.543547 | orchestrator | "crush_location": "{}"
2025-05-19 22:25:52.543558 | orchestrator | },
2025-05-19 22:25:52.543569 | orchestrator | {
2025-05-19 22:25:52.543586 | orchestrator | "rank": 1,
2025-05-19 22:25:52.543606 | orchestrator | "name": "testbed-node-1",
2025-05-19 22:25:52.543624 | orchestrator | "public_addrs": {
2025-05-19 22:25:52.543643 | orchestrator | "addrvec": [
2025-05-19 22:25:52.543685 | orchestrator | {
2025-05-19 22:25:52.543706 | orchestrator | "type": "v2",
2025-05-19 22:25:52.543726 | orchestrator | "addr": "192.168.16.11:3300",
2025-05-19 22:25:52.543745 | orchestrator | "nonce": 0
2025-05-19 22:25:52.543762 | orchestrator | },
2025-05-19 22:25:52.543782 | orchestrator | {
2025-05-19 22:25:52.543800 | orchestrator | "type": "v1",
2025-05-19 22:25:52.543819 | orchestrator | "addr": "192.168.16.11:6789",
2025-05-19 22:25:52.543837 | orchestrator | "nonce": 0
2025-05-19 22:25:52.543856 | orchestrator | }
2025-05-19 22:25:52.543876 | orchestrator | ]
2025-05-19 22:25:52.543894 | orchestrator | },
2025-05-19 22:25:52.543913 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-05-19 22:25:52.543933 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-05-19 22:25:52.543952 | orchestrator | "priority": 0,
2025-05-19 22:25:52.543971 | orchestrator | "weight": 0,
2025-05-19 22:25:52.543990 | orchestrator | "crush_location": "{}"
2025-05-19 22:25:52.544008 | orchestrator | },
2025-05-19 22:25:52.544027 | orchestrator | {
2025-05-19 22:25:52.544047 | orchestrator | "rank": 2,
2025-05-19 22:25:52.544066 | orchestrator | "name": "testbed-node-2",
2025-05-19 22:25:52.544085 | orchestrator | "public_addrs": {
2025-05-19 22:25:52.544098 | orchestrator | "addrvec": [
2025-05-19 22:25:52.544109 | orchestrator | {
2025-05-19 22:25:52.544120 | orchestrator | "type": "v2",
2025-05-19 22:25:52.544131 | orchestrator | "addr": "192.168.16.12:3300",
2025-05-19 22:25:52.544141 | orchestrator | "nonce": 0
2025-05-19 22:25:52.544152 | orchestrator | },
2025-05-19 22:25:52.544163 | orchestrator | {
2025-05-19 22:25:52.544173 | orchestrator | "type": "v1",
2025-05-19 22:25:52.544184 | orchestrator | "addr": "192.168.16.12:6789",
2025-05-19 22:25:52.544194 | orchestrator | "nonce": 0
2025-05-19 22:25:52.544208 | orchestrator | }
2025-05-19 22:25:52.544227 | orchestrator | ]
2025-05-19 22:25:52.544245 | orchestrator | },
2025-05-19 22:25:52.544264 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-05-19 22:25:52.544284 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-05-19 22:25:52.544301 | orchestrator | "priority": 0,
2025-05-19 22:25:52.544320 | orchestrator | "weight": 0,
2025-05-19 22:25:52.544338 | orchestrator | "crush_location": "{}"
2025-05-19 22:25:52.544356 | orchestrator | }
2025-05-19 22:25:52.544375 | orchestrator | ]
2025-05-19 22:25:52.544393 | orchestrator | }
2025-05-19 22:25:52.544428 | orchestrator | }
2025-05-19 22:25:52.544448 | orchestrator |
2025-05-19 22:25:52.544461 | orchestrator | # Ceph free space status
2025-05-19 22:25:52.544472 | orchestrator |
2025-05-19 22:25:52.544483 | orchestrator | + echo
2025-05-19 22:25:52.544494 | orchestrator | + echo '# Ceph free space status'
2025-05-19 22:25:52.544505 | orchestrator | + echo
2025-05-19 22:25:52.544516 | orchestrator | + ceph df
2025-05-19 22:25:53.171513 | orchestrator | --- RAW STORAGE ---
2025-05-19 22:25:53.171624 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-05-19 22:25:53.171653 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-05-19 22:25:53.171709 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-05-19 22:25:53.171721 | orchestrator |
2025-05-19 22:25:53.171733 | orchestrator | --- POOLS ---
2025-05-19 22:25:53.171745 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-05-19 22:25:53.171759 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-05-19 22:25:53.171770 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-05-19 22:25:53.171782 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-05-19 22:25:53.171793 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-05-19 22:25:53.171805 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-05-19 22:25:53.171816 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-05-19 22:25:53.171827 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-05-19 22:25:53.171838 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-05-19 22:25:53.171849 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2025-05-19 22:25:53.171860 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-05-19 22:25:53.171871 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-05-19 22:25:53.171882 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.91 35 GiB
2025-05-19 22:25:53.171893 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-05-19 22:25:53.171904 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-05-19 22:25:53.227898 | orchestrator | ++ semver latest 5.0.0
2025-05-19 22:25:53.277281 | orchestrator | + [[ -1 -eq -1 ]]
2025-05-19 22:25:53.277383 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-19 22:25:53.277400 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-05-19 22:25:53.277413 | orchestrator | + osism apply facts
2025-05-19 22:25:55.261375 | orchestrator | 2025-05-19 22:25:55 | INFO  | Task fdaa111d-7696-423e-a5d2-d55bde47e433 (facts) was prepared for execution.
2025-05-19 22:25:55.261471 | orchestrator | 2025-05-19 22:25:55 | INFO  | It takes a moment until task fdaa111d-7696-423e-a5d2-d55bde47e433 (facts) has been started and output is visible here.
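As a side note, the `ceph quorum_status` dump above already contains everything the quorum test in the `osism validate ceph-mons` run further down needs: the monitors listed in the monmap versus the names actually in quorum. A minimal sketch of that comparison (hypothetical helper, fed a trimmed-down sample of the JSON printed above):

```python
import json

# Trimmed sample of the `ceph quorum_status` output shown above.
status = json.loads("""
{
  "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
  "quorum_leader_name": "testbed-node-0",
  "monmap": {
    "mons": [
      {"rank": 0, "name": "testbed-node-0"},
      {"rank": 1, "name": "testbed-node-1"},
      {"rank": 2, "name": "testbed-node-2"}
    ]
  }
}
""")

def mons_missing_from_quorum(status: dict) -> set:
    """Return monitors that are in the monmap but not currently in quorum."""
    monmap_names = {mon["name"] for mon in status["monmap"]["mons"]}
    return monmap_names - set(status["quorum_names"])

missing = mons_missing_from_quorum(status)
print("quorum OK" if not missing else f"mons out of quorum: {sorted(missing)}")
# prints: quorum OK
```

On a live cluster the same check can be done in one pipeline, e.g. something like `ceph quorum_status | jq -e '(.quorum_names | length) == (.monmap.mons | length)'`.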
2025-05-19 22:25:59.844117 | orchestrator |
2025-05-19 22:25:59.848154 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-05-19 22:25:59.848747 | orchestrator |
2025-05-19 22:25:59.848887 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-19 22:25:59.849163 | orchestrator | Monday 19 May 2025 22:25:59 +0000 (0:00:00.303) 0:00:00.303 ************
2025-05-19 22:26:00.615986 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:01.472814 | orchestrator | ok: [testbed-manager]
2025-05-19 22:26:01.473550 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:26:01.474960 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:26:01.476154 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:26:01.477611 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:26:01.479037 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:26:01.479785 | orchestrator |
2025-05-19 22:26:01.480757 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-19 22:26:01.481511 | orchestrator | Monday 19 May 2025 22:26:01 +0000 (0:00:01.627) 0:00:01.930 ************
2025-05-19 22:26:01.682290 | orchestrator | skipping: [testbed-manager]
2025-05-19 22:26:01.779612 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:01.870545 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:26:01.949491 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:26:02.038232 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:26:02.872625 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:26:02.873100 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:26:02.873758 | orchestrator |
2025-05-19 22:26:02.876442 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-19 22:26:02.877069 | orchestrator |
2025-05-19 22:26:02.878585 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-19 22:26:02.879456 | orchestrator | Monday 19 May 2025 22:26:02 +0000 (0:00:01.406) 0:00:03.336 ************
2025-05-19 22:26:08.221212 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:26:08.221804 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:08.223240 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:26:08.226803 | orchestrator | ok: [testbed-manager]
2025-05-19 22:26:08.227264 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:26:08.228087 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:26:08.229357 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:26:08.229779 | orchestrator |
2025-05-19 22:26:08.231150 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-19 22:26:08.231537 | orchestrator |
2025-05-19 22:26:08.232683 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-19 22:26:08.233736 | orchestrator | Monday 19 May 2025 22:26:08 +0000 (0:00:05.348) 0:00:08.685 ************
2025-05-19 22:26:08.372192 | orchestrator | skipping: [testbed-manager]
2025-05-19 22:26:08.441930 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:08.516709 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:26:08.593240 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:26:08.675755 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:26:08.726634 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:26:08.727196 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:26:08.729074 | orchestrator |
2025-05-19 22:26:08.730189 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 22:26:08.730970 | orchestrator | 2025-05-19 22:26:08 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
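As an aside, the MIN/MAX VAR and STDDEV figures that `ceph osd df tree` reported further up (0.89/1.11 and 0.47) can be reproduced from the per-OSD %USE column of that table; a quick sketch using the values as printed:

```python
from math import sqrt

# Per-OSD %USE values from the `ceph osd df tree` output above
# (osd.0, osd.4, osd.2, osd.3, osd.1, osd.5).
use = [5.46, 6.37, 6.08, 5.75, 6.57, 5.26]

mean = sum(use) / len(use)   # overall utilization, 5.92 in the table
min_var = min(use) / mean    # MIN VAR: least-used OSD relative to the mean
max_var = max(use) / mean    # MAX VAR: most-used OSD relative to the mean
stddev = sqrt(sum((u - mean) ** 2 for u in use) / len(use))  # population stddev

print(f"MIN/MAX VAR: {min_var:.2f}/{max_var:.2f} STDDEV: {stddev:.2f}")
# prints: MIN/MAX VAR: 0.89/1.11 STDDEV: 0.47
```

A narrow VAR band like this simply indicates that data is spread evenly across the six OSDs, which is expected for a freshly deployed testbed.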
2025-05-19 22:26:08.731462 | orchestrator | 2025-05-19 22:26:08 | INFO  | Please wait and do not abort execution.
2025-05-19 22:26:08.732462 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 22:26:08.733713 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 22:26:08.734894 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 22:26:08.736005 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 22:26:08.736871 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 22:26:08.737626 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 22:26:08.738247 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 22:26:08.739577 | orchestrator |
2025-05-19 22:26:08.740190 | orchestrator |
2025-05-19 22:26:08.740894 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:26:08.741763 | orchestrator | Monday 19 May 2025 22:26:08 +0000 (0:00:00.504) 0:00:09.190 ************
2025-05-19 22:26:08.742856 | orchestrator | ===============================================================================
2025-05-19 22:26:08.743637 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.35s
2025-05-19 22:26:08.744334 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.63s
2025-05-19 22:26:08.744800 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.41s
2025-05-19 22:26:08.745884 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-05-19 22:26:09.575027 | orchestrator | + osism validate ceph-mons
2025-05-19 22:26:31.016782 | orchestrator |
2025-05-19 22:26:31.016924 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-05-19 22:26:31.016942 | orchestrator |
2025-05-19 22:26:31.016956 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-05-19 22:26:31.016970 | orchestrator | Monday 19 May 2025 22:26:15 +0000 (0:00:00.449) 0:00:00.449 ************
2025-05-19 22:26:31.016983 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-19 22:26:31.016996 | orchestrator |
2025-05-19 22:26:31.017008 | orchestrator | TASK [Create report output directory] ******************************************
2025-05-19 22:26:31.017021 | orchestrator | Monday 19 May 2025 22:26:16 +0000 (0:00:00.630) 0:00:01.079 ************
2025-05-19 22:26:31.017033 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-19 22:26:31.017045 | orchestrator |
2025-05-19 22:26:31.017085 | orchestrator | TASK [Define report vars] ******************************************************
2025-05-19 22:26:31.017100 | orchestrator | Monday 19 May 2025 22:26:16 +0000 (0:00:00.775) 0:00:01.854 ************
2025-05-19 22:26:31.017112 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.017126 | orchestrator |
2025-05-19 22:26:31.017140 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-05-19 22:26:31.017157 | orchestrator | Monday 19 May 2025 22:26:17 +0000 (0:00:00.262) 0:00:02.116 ************
2025-05-19 22:26:31.017165 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.017173 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:26:31.017182 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:26:31.017190 | orchestrator |
2025-05-19 22:26:31.017198 | orchestrator | TASK [Get container info] ******************************************************
2025-05-19 22:26:31.017206 | orchestrator | Monday 19 May 2025 22:26:17 +0000 (0:00:00.281) 0:00:02.398 ************
2025-05-19 22:26:31.017215 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.017222 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:26:31.017230 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:26:31.017238 | orchestrator |
2025-05-19 22:26:31.017246 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-05-19 22:26:31.017254 | orchestrator | Monday 19 May 2025 22:26:18 +0000 (0:00:00.986) 0:00:03.385 ************
2025-05-19 22:26:31.017262 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.017271 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:26:31.017279 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:26:31.017287 | orchestrator |
2025-05-19 22:26:31.017295 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-05-19 22:26:31.017303 | orchestrator | Monday 19 May 2025 22:26:18 +0000 (0:00:00.305) 0:00:03.690 ************
2025-05-19 22:26:31.017311 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.017319 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:26:31.017327 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:26:31.017336 | orchestrator |
2025-05-19 22:26:31.017344 | orchestrator | TASK [Prepare test data] *******************************************************
2025-05-19 22:26:31.017352 | orchestrator | Monday 19 May 2025 22:26:19 +0000 (0:00:00.565) 0:00:04.256 ************
2025-05-19 22:26:31.017360 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.017368 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:26:31.017376 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:26:31.017383 | orchestrator |
2025-05-19 22:26:31.017392 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-05-19 22:26:31.017400 | orchestrator | Monday 19 May 2025 22:26:19 +0000 (0:00:00.352) 0:00:04.608 ************
2025-05-19 22:26:31.017432 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.017441 | orchestrator | skipping: [testbed-node-1]
2025-05-19 22:26:31.017449 | orchestrator | skipping: [testbed-node-2]
2025-05-19 22:26:31.017457 | orchestrator |
2025-05-19 22:26:31.017465 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-05-19 22:26:31.017473 | orchestrator | Monday 19 May 2025 22:26:19 +0000 (0:00:00.295) 0:00:04.904 ************
2025-05-19 22:26:31.017481 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.017489 | orchestrator | ok: [testbed-node-1]
2025-05-19 22:26:31.017497 | orchestrator | ok: [testbed-node-2]
2025-05-19 22:26:31.017504 | orchestrator |
2025-05-19 22:26:31.017513 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-05-19 22:26:31.017520 | orchestrator | Monday 19 May 2025 22:26:20 +0000 (0:00:00.343) 0:00:05.247 ************
2025-05-19 22:26:31.017528 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.017536 | orchestrator |
2025-05-19 22:26:31.017544 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-05-19 22:26:31.017552 | orchestrator | Monday 19 May 2025 22:26:20 +0000 (0:00:00.802) 0:00:06.050 ************
2025-05-19 22:26:31.017560 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.017568 | orchestrator |
2025-05-19 22:26:31.017575 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-05-19 22:26:31.017583 | orchestrator | Monday 19 May 2025 22:26:21 +0000 (0:00:00.248) 0:00:06.298 ************
2025-05-19 22:26:31.017591 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.017599 | orchestrator |
2025-05-19 22:26:31.017607 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:26:31.017616 | orchestrator | Monday 19 May 2025 22:26:21 +0000 (0:00:00.255) 0:00:06.554 ************
2025-05-19 22:26:31.017692 | orchestrator |
2025-05-19 22:26:31.017707 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:26:31.017720 | orchestrator | Monday 19 May 2025 22:26:21 +0000 (0:00:00.078) 0:00:06.632 ************
2025-05-19 22:26:31.017732 | orchestrator |
2025-05-19 22:26:31.017745 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:26:31.017757 | orchestrator | Monday 19 May 2025 22:26:21 +0000 (0:00:00.070) 0:00:06.703 ************
2025-05-19 22:26:31.017770 | orchestrator |
2025-05-19 22:26:31.017783 | orchestrator | TASK [Print report file information] *******************************************
2025-05-19 22:26:31.017796 | orchestrator | Monday 19 May 2025 22:26:21 +0000 (0:00:00.076) 0:00:06.780 ************
2025-05-19 22:26:31.017810 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.017823 | orchestrator |
2025-05-19 22:26:31.017836 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-05-19 22:26:31.017850 | orchestrator | Monday 19 May 2025 22:26:21 +0000 (0:00:00.262) 0:00:07.042 ************
2025-05-19 22:26:31.017863 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.017875 | orchestrator |
2025-05-19 22:26:31.017910 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-05-19 22:26:31.017923 | orchestrator | Monday 19 May 2025 22:26:22 +0000 (0:00:00.255) 0:00:07.297 ************
2025-05-19 22:26:31.017936 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.017949 | orchestrator |
2025-05-19 22:26:31.017962 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-05-19 22:26:31.017975 | orchestrator | Monday 19 May 2025 22:26:22 +0000 (0:00:00.108) 0:00:07.405 ************
2025-05-19 22:26:31.017988 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:26:31.018002 | orchestrator |
2025-05-19 22:26:31.018096 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-05-19 22:26:31.018111 | orchestrator | Monday 19 May 2025 22:26:23 +0000 (0:00:01.540) 0:00:08.946 ************
2025-05-19 22:26:31.018125 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.018138 | orchestrator |
2025-05-19 22:26:31.018152 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-05-19 22:26:31.018180 | orchestrator | Monday 19 May 2025 22:26:24 +0000 (0:00:00.268) 0:00:09.214 ************
2025-05-19 22:26:31.018189 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.018197 | orchestrator |
2025-05-19 22:26:31.018206 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-05-19 22:26:31.018220 | orchestrator | Monday 19 May 2025 22:26:24 +0000 (0:00:00.427) 0:00:09.642 ************
2025-05-19 22:26:31.018228 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.018236 | orchestrator |
2025-05-19 22:26:31.018244 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-05-19 22:26:31.018252 | orchestrator | Monday 19 May 2025 22:26:24 +0000 (0:00:00.247) 0:00:09.889 ************
2025-05-19 22:26:31.018260 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.018268 | orchestrator |
2025-05-19 22:26:31.018276 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-05-19 22:26:31.018284 | orchestrator | Monday 19 May 2025 22:26:25 +0000 (0:00:00.243) 0:00:10.132 ************
2025-05-19 22:26:31.018292 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.018300 | orchestrator |
2025-05-19 22:26:31.018308 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-05-19 22:26:31.018316 | orchestrator | Monday 19 May 2025 22:26:25 +0000 (0:00:00.113) 0:00:10.246 ************
2025-05-19 22:26:31.018324 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.018332 | orchestrator |
2025-05-19 22:26:31.018340 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-05-19 22:26:31.018348 | orchestrator | Monday 19 May 2025 22:26:25 +0000 (0:00:00.158) 0:00:10.404 ************
2025-05-19 22:26:31.018356 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.018364 | orchestrator |
2025-05-19 22:26:31.018372 | orchestrator | TASK [Gather status data] ******************************************************
2025-05-19 22:26:31.018380 | orchestrator | Monday 19 May 2025 22:26:25 +0000 (0:00:00.131) 0:00:10.535 ************
2025-05-19 22:26:31.018388 | orchestrator | changed: [testbed-node-0]
2025-05-19 22:26:31.018396 | orchestrator |
2025-05-19 22:26:31.018404 | orchestrator | TASK [Set health test data] ****************************************************
2025-05-19 22:26:31.018412 | orchestrator | Monday 19 May 2025 22:26:26 +0000 (0:00:01.338) 0:00:11.874 ************
2025-05-19 22:26:31.018420 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.018428 | orchestrator |
2025-05-19 22:26:31.018436 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-05-19 22:26:31.018444 | orchestrator | Monday 19 May 2025 22:26:27 +0000 (0:00:00.255) 0:00:12.129 ************
2025-05-19 22:26:31.018452 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.018460 | orchestrator |
2025-05-19 22:26:31.018468 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-05-19 22:26:31.018476 | orchestrator | Monday 19 May 2025 22:26:27 +0000 (0:00:00.140) 0:00:12.269 ************
2025-05-19 22:26:31.018484 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:31.018492 | orchestrator |
2025-05-19 22:26:31.018500 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-05-19 22:26:31.018508 | orchestrator | Monday 19 May 2025 22:26:27 +0000 (0:00:00.154) 0:00:12.424 ************
2025-05-19 22:26:31.018516 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.018524 | orchestrator |
2025-05-19 22:26:31.018532 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-05-19 22:26:31.018540 | orchestrator | Monday 19 May 2025 22:26:27 +0000 (0:00:00.138) 0:00:12.563 ************
2025-05-19 22:26:31.018548 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.018556 | orchestrator |
2025-05-19 22:26:31.018564 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-05-19 22:26:31.018572 | orchestrator | Monday 19 May 2025 22:26:27 +0000 (0:00:00.406) 0:00:12.970 ************
2025-05-19 22:26:31.018580 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-19 22:26:31.018588 | orchestrator |
2025-05-19 22:26:31.018596 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-05-19 22:26:31.018612 | orchestrator | Monday 19 May 2025 22:26:28 +0000 (0:00:00.296) 0:00:13.266 ************
2025-05-19 22:26:31.018620 | orchestrator | skipping: [testbed-node-0]
2025-05-19 22:26:31.018628 | orchestrator |
2025-05-19 22:26:31.018676 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-05-19 22:26:31.018686 | orchestrator | Monday 19 May 2025 22:26:28 +0000 (0:00:00.260) 0:00:13.527 ************
2025-05-19 22:26:31.018694 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-19 22:26:31.018702 | orchestrator |
2025-05-19 22:26:31.018710 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-05-19 22:26:31.018724 | orchestrator | Monday 19 May 2025 22:26:30 +0000 (0:00:01.742) 0:00:15.270 ************
2025-05-19 22:26:31.018732 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-19 22:26:31.018740 | orchestrator |
2025-05-19 22:26:31.018748 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-05-19 22:26:31.018756 | orchestrator | Monday 19 May 2025 22:26:30 +0000 (0:00:00.298) 0:00:15.568 ************
2025-05-19 22:26:31.018764 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-19 22:26:31.018771 | orchestrator |
2025-05-19 22:26:31.018789 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:26:33.570541 | orchestrator | Monday 19 May 2025 22:26:30 +0000 (0:00:00.285) 0:00:15.853 ************
2025-05-19 22:26:33.570683 | orchestrator |
2025-05-19 22:26:33.570701 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:26:33.570713 | orchestrator | Monday 19 May 2025 22:26:30 +0000 (0:00:00.071) 0:00:15.925 ************
2025-05-19 22:26:33.570724 | orchestrator |
2025-05-19 22:26:33.570736 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:26:33.570747 | orchestrator | Monday 19 May 2025 22:26:30 +0000 (0:00:00.075) 0:00:16.001 ************
2025-05-19 22:26:33.570758 | orchestrator |
2025-05-19 22:26:33.570769 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-05-19 22:26:33.570780 | orchestrator | Monday 19 May 2025 22:26:31 +0000 (0:00:00.074) 0:00:16.075 ************
2025-05-19 22:26:33.570791 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-19 22:26:33.570802 | orchestrator |
2025-05-19 22:26:33.570813 | orchestrator | TASK [Print report file information] *******************************************
2025-05-19 22:26:33.570824 | orchestrator | Monday 19 May 2025 22:26:32 +0000 (0:00:01.646) 0:00:17.722 ************
2025-05-19 22:26:33.570835 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-05-19 22:26:33.570846 | orchestrator |  "msg": [
2025-05-19 22:26:33.570859 | orchestrator |  "Validator run completed.",
2025-05-19 22:26:33.570870 | orchestrator |  "You can find the report file here:",
2025-05-19 22:26:33.570881 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-05-19T22:26:15+00:00-report.json",
2025-05-19 22:26:33.570893 | orchestrator |  "on the following host:",
2025-05-19 22:26:33.570904 | orchestrator |  "testbed-manager"
2025-05-19 22:26:33.570915 | orchestrator |  ]
2025-05-19 22:26:33.570926 | orchestrator | }
2025-05-19 22:26:33.570937 | orchestrator |
2025-05-19 22:26:33.570948 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 22:26:33.570960 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-19 22:26:33.570973 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 22:26:33.570984 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-19 22:26:33.570995 | orchestrator |
2025-05-19 22:26:33.571006 | orchestrator |
2025-05-19 22:26:33.571017 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:26:33.571051 | orchestrator | Monday 19 May 2025 22:26:33 +0000 (0:00:00.662) 0:00:18.384 ************
2025-05-19 22:26:33.571063 | orchestrator | ===============================================================================
2025-05-19 22:26:33.571074 | orchestrator | Aggregate test results step one ----------------------------------------- 1.74s
2025-05-19 22:26:33.571085 | orchestrator | Write report file ------------------------------------------------------- 1.65s
2025-05-19 22:26:33.571110 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.54s
2025-05-19 22:26:33.571121 | orchestrator | Gather status data ------------------------------------------------------ 1.34s
2025-05-19 22:26:33.571132 | orchestrator | Get container info ------------------------------------------------------ 0.99s
2025-05-19 22:26:33.571143 | orchestrator | Aggregate test results step one ----------------------------------------- 0.80s
2025-05-19 22:26:33.571154 | orchestrator | Create report output directory ------------------------------------------ 0.78s
2025-05-19 22:26:33.571166 | orchestrator | Print report file information ------------------------------------------- 0.66s
2025-05-19 22:26:33.571177 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s
2025-05-19 22:26:33.571188 | orchestrator | Set test result to passed if container is existing ---------------------- 0.57s
2025-05-19 22:26:33.571199 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.43s
2025-05-19 22:26:33.571210 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.41s
2025-05-19 22:26:33.571221 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s
2025-05-19 22:26:33.571232 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.34s
2025-05-19 22:26:33.571243 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2025-05-19 22:26:33.571254 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s
2025-05-19 22:26:33.571265 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.30s
2025-05-19 22:26:33.571276 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s
2025-05-19 22:26:33.571287 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s
2025-05-19 22:26:33.571298 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s
2025-05-19 22:26:33.761067 | orchestrator | + osism validate ceph-mgrs
2025-05-19 22:26:54.497587 | orchestrator |
2025-05-19 22:26:54.497697 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-05-19 22:26:54.497704 | orchestrator |
2025-05-19 22:26:54.497709 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-05-19 22:26:54.497714 | orchestrator | Monday 19 May 2025 22:26:39 +0000 (0:00:00.469) 0:00:00.469 ************
2025-05-19 22:26:54.497719 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-19 22:26:54.497723 | orchestrator |
2025-05-19 22:26:54.497728 | orchestrator | TASK [Create report output directory] ******************************************
2025-05-19 22:26:54.497732 | orchestrator | Monday 19 May 2025 22:26:40 +0000 (0:00:00.634) 0:00:01.104 ************
2025-05-19 22:26:54.497736 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-05-19 22:26:54.497740 | orchestrator |
2025-05-19 22:26:54.497744 | orchestrator | TASK [Define report vars] ******************************************************
2025-05-19 22:26:54.497747 | orchestrator | Monday 19 May 2025 22:26:41 +0000 (0:00:00.859) 0:00:01.963 ************
2025-05-19 22:26:54.497751 | orchestrator | ok: [testbed-node-0]
2025-05-19 22:26:54.497756 | orchestrator |
2025-05-19 22:26:54.497760 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-05-19 22:26:54.497764 | orchestrator | Monday 19 May 2025
22:26:41 +0000 (0:00:00.272) 0:00:02.236 ************ 2025-05-19 22:26:54.497768 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:26:54.497772 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:26:54.497776 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:26:54.497780 | orchestrator | 2025-05-19 22:26:54.497798 | orchestrator | TASK [Get container info] ****************************************************** 2025-05-19 22:26:54.497802 | orchestrator | Monday 19 May 2025 22:26:41 +0000 (0:00:00.308) 0:00:02.544 ************ 2025-05-19 22:26:54.497806 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:26:54.497810 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:26:54.497814 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:26:54.497817 | orchestrator | 2025-05-19 22:26:54.497832 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-05-19 22:26:54.497836 | orchestrator | Monday 19 May 2025 22:26:42 +0000 (0:00:00.979) 0:00:03.524 ************ 2025-05-19 22:26:54.497840 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:26:54.497844 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:26:54.497848 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:26:54.497851 | orchestrator | 2025-05-19 22:26:54.497855 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-05-19 22:26:54.497859 | orchestrator | Monday 19 May 2025 22:26:42 +0000 (0:00:00.273) 0:00:03.797 ************ 2025-05-19 22:26:54.497863 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:26:54.497867 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:26:54.497870 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:26:54.497874 | orchestrator | 2025-05-19 22:26:54.497878 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-19 22:26:54.497882 | orchestrator | Monday 19 May 2025 22:26:43 +0000 (0:00:00.435) 0:00:04.233 ************ 
2025-05-19 22:26:54.497885 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:26:54.497889 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:26:54.497893 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:26:54.497897 | orchestrator | 2025-05-19 22:26:54.497900 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-05-19 22:26:54.497904 | orchestrator | Monday 19 May 2025 22:26:43 +0000 (0:00:00.286) 0:00:04.519 ************ 2025-05-19 22:26:54.497908 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:26:54.497912 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:26:54.497916 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:26:54.497919 | orchestrator | 2025-05-19 22:26:54.497923 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-05-19 22:26:54.497927 | orchestrator | Monday 19 May 2025 22:26:43 +0000 (0:00:00.293) 0:00:04.813 ************ 2025-05-19 22:26:54.497931 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:26:54.497935 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:26:54.497938 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:26:54.497942 | orchestrator | 2025-05-19 22:26:54.497946 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-19 22:26:54.497950 | orchestrator | Monday 19 May 2025 22:26:44 +0000 (0:00:00.304) 0:00:05.117 ************ 2025-05-19 22:26:54.497954 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:26:54.497958 | orchestrator | 2025-05-19 22:26:54.497961 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-19 22:26:54.497965 | orchestrator | Monday 19 May 2025 22:26:44 +0000 (0:00:00.773) 0:00:05.891 ************ 2025-05-19 22:26:54.497969 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:26:54.497973 | orchestrator | 2025-05-19 22:26:54.497977 | orchestrator | TASK [Aggregate test 
results step three] *************************************** 2025-05-19 22:26:54.497980 | orchestrator | Monday 19 May 2025 22:26:45 +0000 (0:00:00.291) 0:00:06.183 ************ 2025-05-19 22:26:54.497984 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:26:54.497988 | orchestrator | 2025-05-19 22:26:54.497992 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 22:26:54.497996 | orchestrator | Monday 19 May 2025 22:26:45 +0000 (0:00:00.256) 0:00:06.439 ************ 2025-05-19 22:26:54.498000 | orchestrator | 2025-05-19 22:26:54.498004 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 22:26:54.498008 | orchestrator | Monday 19 May 2025 22:26:45 +0000 (0:00:00.088) 0:00:06.528 ************ 2025-05-19 22:26:54.498011 | orchestrator | 2025-05-19 22:26:54.498047 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 22:26:54.498062 | orchestrator | Monday 19 May 2025 22:26:45 +0000 (0:00:00.074) 0:00:06.602 ************ 2025-05-19 22:26:54.498066 | orchestrator | 2025-05-19 22:26:54.498070 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-19 22:26:54.498074 | orchestrator | Monday 19 May 2025 22:26:45 +0000 (0:00:00.074) 0:00:06.677 ************ 2025-05-19 22:26:54.498078 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:26:54.498082 | orchestrator | 2025-05-19 22:26:54.498085 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-05-19 22:26:54.498089 | orchestrator | Monday 19 May 2025 22:26:46 +0000 (0:00:00.261) 0:00:06.938 ************ 2025-05-19 22:26:54.498093 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:26:54.498097 | orchestrator | 2025-05-19 22:26:54.498112 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-05-19 
22:26:54.498116 | orchestrator | Monday 19 May 2025 22:26:46 +0000 (0:00:00.236) 0:00:07.174 ************ 2025-05-19 22:26:54.498119 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:26:54.498123 | orchestrator | 2025-05-19 22:26:54.498127 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-05-19 22:26:54.498131 | orchestrator | Monday 19 May 2025 22:26:46 +0000 (0:00:00.115) 0:00:07.290 ************ 2025-05-19 22:26:54.498135 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:26:54.498138 | orchestrator | 2025-05-19 22:26:54.498142 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-05-19 22:26:54.498146 | orchestrator | Monday 19 May 2025 22:26:48 +0000 (0:00:01.852) 0:00:09.143 ************ 2025-05-19 22:26:54.498150 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:26:54.498153 | orchestrator | 2025-05-19 22:26:54.498157 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-05-19 22:26:54.498161 | orchestrator | Monday 19 May 2025 22:26:48 +0000 (0:00:00.267) 0:00:09.411 ************ 2025-05-19 22:26:54.498164 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:26:54.498168 | orchestrator | 2025-05-19 22:26:54.498172 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-05-19 22:26:54.498176 | orchestrator | Monday 19 May 2025 22:26:49 +0000 (0:00:00.711) 0:00:10.122 ************ 2025-05-19 22:26:54.498179 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:26:54.498183 | orchestrator | 2025-05-19 22:26:54.498187 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-05-19 22:26:54.498191 | orchestrator | Monday 19 May 2025 22:26:49 +0000 (0:00:00.154) 0:00:10.277 ************ 2025-05-19 22:26:54.498194 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:26:54.498198 | orchestrator | 2025-05-19 
22:26:54.498202 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-05-19 22:26:54.498210 | orchestrator | Monday 19 May 2025 22:26:49 +0000 (0:00:00.157) 0:00:10.434 ************ 2025-05-19 22:26:54.498214 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 22:26:54.498217 | orchestrator | 2025-05-19 22:26:54.498221 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-05-19 22:26:54.498225 | orchestrator | Monday 19 May 2025 22:26:49 +0000 (0:00:00.267) 0:00:10.701 ************ 2025-05-19 22:26:54.498229 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:26:54.498232 | orchestrator | 2025-05-19 22:26:54.498236 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-05-19 22:26:54.498240 | orchestrator | Monday 19 May 2025 22:26:50 +0000 (0:00:00.245) 0:00:10.947 ************ 2025-05-19 22:26:54.498244 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 22:26:54.498247 | orchestrator | 2025-05-19 22:26:54.498251 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-05-19 22:26:54.498255 | orchestrator | Monday 19 May 2025 22:26:51 +0000 (0:00:01.352) 0:00:12.299 ************ 2025-05-19 22:26:54.498259 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 22:26:54.498262 | orchestrator | 2025-05-19 22:26:54.498269 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-05-19 22:26:54.498273 | orchestrator | Monday 19 May 2025 22:26:51 +0000 (0:00:00.296) 0:00:12.595 ************ 2025-05-19 22:26:54.498277 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 22:26:54.498280 | orchestrator | 2025-05-19 22:26:54.498284 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-05-19 22:26:54.498288 | orchestrator | Monday 19 May 2025 22:26:51 +0000 (0:00:00.255) 0:00:12.851 ************ 2025-05-19 22:26:54.498292 | orchestrator | 2025-05-19 22:26:54.498295 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 22:26:54.498299 | orchestrator | Monday 19 May 2025 22:26:52 +0000 (0:00:00.071) 0:00:12.922 ************ 2025-05-19 22:26:54.498303 | orchestrator | 2025-05-19 22:26:54.498307 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-05-19 22:26:54.498310 | orchestrator | Monday 19 May 2025 22:26:52 +0000 (0:00:00.067) 0:00:12.990 ************ 2025-05-19 22:26:54.498314 | orchestrator | 2025-05-19 22:26:54.498318 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-05-19 22:26:54.498322 | orchestrator | Monday 19 May 2025 22:26:52 +0000 (0:00:00.072) 0:00:13.062 ************ 2025-05-19 22:26:54.498325 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-05-19 22:26:54.498329 | orchestrator | 2025-05-19 22:26:54.498333 | orchestrator | TASK [Print report file information] ******************************************* 2025-05-19 22:26:54.498336 | orchestrator | Monday 19 May 2025 22:26:54 +0000 (0:00:01.909) 0:00:14.971 ************ 2025-05-19 22:26:54.498340 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-05-19 22:26:54.498344 | orchestrator |  "msg": [ 2025-05-19 22:26:54.498348 | orchestrator |  "Validator run completed.", 2025-05-19 22:26:54.498352 | orchestrator |  "You can find the report file here:", 2025-05-19 22:26:54.498356 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-05-19T22:26:40+00:00-report.json", 2025-05-19 22:26:54.498360 | orchestrator |  "on the following host:", 2025-05-19 22:26:54.498364 | orchestrator |  "testbed-manager" 
2025-05-19 22:26:54.498368 | orchestrator |  ] 2025-05-19 22:26:54.498372 | orchestrator | } 2025-05-19 22:26:54.498376 | orchestrator | 2025-05-19 22:26:54.498380 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:26:54.498384 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-19 22:26:54.498389 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 22:26:54.498397 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 22:26:54.908382 | orchestrator | 2025-05-19 22:26:54.908537 | orchestrator | 2025-05-19 22:26:54.908553 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:26:54.908568 | orchestrator | Monday 19 May 2025 22:26:54 +0000 (0:00:00.408) 0:00:15.380 ************ 2025-05-19 22:26:54.908580 | orchestrator | =============================================================================== 2025-05-19 22:26:54.908591 | orchestrator | Write report file ------------------------------------------------------- 1.91s 2025-05-19 22:26:54.908602 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.85s 2025-05-19 22:26:54.908613 | orchestrator | Aggregate test results step one ----------------------------------------- 1.35s 2025-05-19 22:26:54.908718 | orchestrator | Get container info ------------------------------------------------------ 0.98s 2025-05-19 22:26:54.908736 | orchestrator | Create report output directory ------------------------------------------ 0.86s 2025-05-19 22:26:54.908748 | orchestrator | Aggregate test results step one ----------------------------------------- 0.77s 2025-05-19 22:26:54.908759 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.71s 2025-05-19 
22:26:54.908796 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-05-19 22:26:54.908808 | orchestrator | Set test result to passed if container is existing ---------------------- 0.44s 2025-05-19 22:26:54.908819 | orchestrator | Print report file information ------------------------------------------- 0.41s 2025-05-19 22:26:54.908829 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s 2025-05-19 22:26:54.908840 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s 2025-05-19 22:26:54.908851 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s 2025-05-19 22:26:54.908862 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s 2025-05-19 22:26:54.908872 | orchestrator | Aggregate test results step two ----------------------------------------- 0.29s 2025-05-19 22:26:54.908902 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2025-05-19 22:26:54.908916 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s 2025-05-19 22:26:54.908928 | orchestrator | Define report vars ------------------------------------------------------ 0.27s 2025-05-19 22:26:54.908980 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.27s 2025-05-19 22:26:54.908994 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.27s 2025-05-19 22:26:55.253028 | orchestrator | + osism validate ceph-osds 2025-05-19 22:27:06.836399 | orchestrator | 2025-05-19 22:27:06.836521 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-05-19 22:27:06.836539 | orchestrator | 2025-05-19 22:27:06.836551 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2025-05-19 22:27:06.836563 | orchestrator | Monday 19 May 2025 22:27:01 +0000 (0:00:00.453) 0:00:00.453 ************ 2025-05-19 22:27:06.836574 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 22:27:06.836586 | orchestrator | 2025-05-19 22:27:06.836597 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-19 22:27:06.836608 | orchestrator | Monday 19 May 2025 22:27:02 +0000 (0:00:00.714) 0:00:01.168 ************ 2025-05-19 22:27:06.836670 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 22:27:06.836682 | orchestrator | 2025-05-19 22:27:06.836692 | orchestrator | TASK [Create report output directory] ****************************************** 2025-05-19 22:27:06.836704 | orchestrator | Monday 19 May 2025 22:27:03 +0000 (0:00:00.445) 0:00:01.613 ************ 2025-05-19 22:27:06.836716 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-19 22:27:06.836727 | orchestrator | 2025-05-19 22:27:06.836738 | orchestrator | TASK [Define report vars] ****************************************************** 2025-05-19 22:27:06.836749 | orchestrator | Monday 19 May 2025 22:27:04 +0000 (0:00:01.056) 0:00:02.670 ************ 2025-05-19 22:27:06.836761 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:27:06.836774 | orchestrator | 2025-05-19 22:27:06.836785 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-19 22:27:06.836796 | orchestrator | Monday 19 May 2025 22:27:04 +0000 (0:00:00.162) 0:00:02.832 ************ 2025-05-19 22:27:06.836807 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:27:06.836819 | orchestrator | 2025-05-19 22:27:06.836830 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-19 22:27:06.836841 | orchestrator | Monday 19 May 2025 22:27:04 +0000 (0:00:00.164) 
0:00:02.997 ************ 2025-05-19 22:27:06.836852 | orchestrator | skipping: [testbed-node-3] 2025-05-19 22:27:06.836864 | orchestrator | skipping: [testbed-node-4] 2025-05-19 22:27:06.836875 | orchestrator | skipping: [testbed-node-5] 2025-05-19 22:27:06.836886 | orchestrator | 2025-05-19 22:27:06.836897 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-05-19 22:27:06.836908 | orchestrator | Monday 19 May 2025 22:27:04 +0000 (0:00:00.316) 0:00:03.313 ************ 2025-05-19 22:27:06.836921 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:27:06.836958 | orchestrator | 2025-05-19 22:27:06.836972 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-05-19 22:27:06.836984 | orchestrator | Monday 19 May 2025 22:27:04 +0000 (0:00:00.170) 0:00:03.484 ************ 2025-05-19 22:27:06.836997 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:27:06.837009 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:27:06.837021 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:27:06.837034 | orchestrator | 2025-05-19 22:27:06.837046 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-05-19 22:27:06.837058 | orchestrator | Monday 19 May 2025 22:27:05 +0000 (0:00:00.395) 0:00:03.880 ************ 2025-05-19 22:27:06.837071 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:27:06.837084 | orchestrator | 2025-05-19 22:27:06.837096 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-05-19 22:27:06.837109 | orchestrator | Monday 19 May 2025 22:27:05 +0000 (0:00:00.631) 0:00:04.511 ************ 2025-05-19 22:27:06.837121 | orchestrator | ok: [testbed-node-3] 2025-05-19 22:27:06.837133 | orchestrator | ok: [testbed-node-4] 2025-05-19 22:27:06.837146 | orchestrator | ok: [testbed-node-5] 2025-05-19 22:27:06.837158 | orchestrator | 2025-05-19 22:27:06.837170 | orchestrator | TASK [Get 
list of ceph-osd containers on host] ********************************* 2025-05-19 22:27:06.837183 | orchestrator | Monday 19 May 2025 22:27:06 +0000 (0:00:00.573) 0:00:05.084 ************ 2025-05-19 22:27:06.837198 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8c75765fbfed75b7c13d736033dcae4badf26f0ea6aef2d7545464ca211aa847', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-19 22:27:06.837214 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f0235c5014846c8a88b287f2efd30fdaf57446019b3fbdccc758b9baebffa5fe', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-19 22:27:06.837227 | orchestrator | skipping: [testbed-node-3] => (item={'id': '78c587a02e385edbcf95229fd8caac920cd32b1b8a9f568b547b493964bf4121', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-19 22:27:06.837262 | orchestrator | skipping: [testbed-node-3] => (item={'id': '59c659defb7b15102a76acc39c55653cf2da3c100c847ffd64ef26efe2d788ea', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-19 22:27:06.837276 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c40f0b53825e88c4c0c5713fa403e6165b56c413aecb6b113c6d3b04e90977fa', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-19 22:27:06.837306 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fac5197689c6a803c954b8d58e831bb7bd2a8d816be333a9f0ac1cff12798f7a', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-19 
22:27:06.837319 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e865f7bc417068a19f64d7fe331f5d0f42a101739c5d61d737f72ffcd7badfe8', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-19 22:27:06.837330 | orchestrator | skipping: [testbed-node-3] => (item={'id': '08f7ccd7552745fe32a2401063993793bcbe27be2a4f40e962db0c736d6f705a', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-05-19 22:27:06.837341 | orchestrator | skipping: [testbed-node-3] => (item={'id': '125240ad5f74900f366d3e4c516a9bd67010953cfff336969d9cbc365c22198e', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-05-19 22:27:06.837364 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7410b6451376e5eb868d31f939ac07435ced2dcd458af533f575f7d9c69db377', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-05-19 22:27:06.837376 | orchestrator | skipping: [testbed-node-3] => (item={'id': '90612eed41c5705d772dda16ad6be388aefec4b84e42886296d4d9b3a4567fa8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-05-19 22:27:06.837387 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a7c12c85497289b04283b2c3cc72efc9f649e2d56d9ff297fbb60b4fae412572', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-05-19 22:27:06.837398 | orchestrator | ok: [testbed-node-3] => (item={'id': 'd3952c021a7ea3256d2ba0f34f9065baebcc44f8d13a089e0cda89537b29dd39', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-05-19 22:27:06.837410 | orchestrator | ok: [testbed-node-3] => (item={'id': '27a36a08c377f3b1576cccef7ec80ad953bb353d7ff23b9be50ecb164f3f5bce', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-05-19 22:27:06.837421 | orchestrator | skipping: [testbed-node-3] => (item={'id': '40fbcc5d56ff8e8a708047a228c015d1c513ac32392794ba9e5a68f1a485284e', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-05-19 22:27:06.837433 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0db0a6ada387506d373ef189059548cac734d713373eb767194381dd71c3d979', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-05-19 22:27:06.837444 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6906ed9687ce9a44e630fd5dab85334f0c4f5c5458e39038e68d04d446778970', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-05-19 22:27:06.837456 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3e278df220d2dedeaa1a4573df5b7c9c0d9556760a740139d228ad693e7903cd', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-19 22:27:06.837467 | orchestrator | skipping: [testbed-node-3] => (item={'id': '062e12c33b54547ccb48a953f774bc8311dfd519ea2de9ad8fa673323520e1ca', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-19 22:27:06.837483 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'c2e9516b700631729ae21916b98022d59d0f3d82b47fdec5a7e9486d84b933c0', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-05-19 22:27:06.837494 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'be1a300bb6a22bd0b0e05be0c433583661b69798400ea4e9d502c572a3f66cf6', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-19 22:27:06.837513 | orchestrator | skipping: [testbed-node-4] => (item={'id': '430b8e37102a43f8257c0a8d7a2a2d17ecf40576e474576b7cb0b7e568af4b1e', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-05-19 22:27:07.130309 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9bd4e270a8455d3302912e89e4cdc0e4e115e89aac233b49cb8ee7d00ae4f348', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-19 22:27:07.130444 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6c81d3380bc227e28e3e7b01e7e07ecffef31ec82f716277cf4ecb845e963a4c', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-05-19 22:27:07.130462 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1b0244c903d54eaab0a428b4de557a877daad0224d5e0262fc6d6977df7778c0', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-05-19 22:27:07.130475 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7bba0dbc740359f9ecd51a5df710b988e3daf7ab48d4f6ed79559e11cb4d0100', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  
2025-05-19 22:27:07.130486 | orchestrator | skipping: [testbed-node-4] => (item={'id': '570bed87e07a4c12788dc2848aae325f358786c885cc0ed11d6ace7d46c6bcf4', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-05-19 22:27:07.130498 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b9e9ae15df4c1eb7432818b32602190365aa4cd4bae7da2154c47a1361cdce34', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-05-19 22:27:07.130509 | orchestrator | skipping: [testbed-node-4] => (item={'id': '113aeb550d3540cb6540a324762fb5d977f96a79b97e432938a6fd3239983d37', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-05-19 22:27:07.130521 | orchestrator | skipping: [testbed-node-4] => (item={'id': '129604525aa5f897062b2b2098ba1228ac577a1d069dbe1f97ec4ae4218be187', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-05-19 22:27:07.130532 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0a3d1f5989a4ac532cd6413c18c3a4b1d17094c0604348238caf55f87c1db29a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})
2025-05-19 22:27:07.130544 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2b2db5d39cd64e592b54ad0b8e67528ed56b54203fd562ddcb3cecacb78364b4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-05-19 22:27:07.130557 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ef0d6c2b5675dd210c98e7fc876b14157d16660b251d239e1939d67ac7fa7b1d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'})
2025-05-19 22:27:07.130570 | orchestrator | ok: [testbed-node-4] => (item={'id': '683340bb1a1f8ce4b525576ca645da7c36e19b6a3c0921d64bd7737c9325b9dd', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'})
2025-05-19 22:27:07.130582 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e166c248b4623a6fd420b00017fcb2c1437a97ad8dc9e6bfacadad08d246600e', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-05-19 22:27:07.130594 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1c67a46ad830d6c4b9f2876cb1228fb1ef4d70a426fcba1bcca40352e6afcb1e', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-05-19 22:27:07.130673 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a84a9f92ff0d4c60c683e10c239849c0e62b777c9ea7e16c39b57fa8a6fbaedc', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-05-19 22:27:07.130715 | orchestrator | skipping: [testbed-node-4] => (item={'id': '62e6f1dad4ca17360a7ceb6edc3b9ff34a55864daf8341ac5404a84c090165b1', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})
2025-05-19 22:27:07.130728 | orchestrator | skipping: [testbed-node-4] => (item={'id': '678e3d5eedd61273a51d47d864b6f40e9bf2a1c85bc8035a0e18c7c56741ac93', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-05-19 22:27:07.130739 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eed1208e1ba9ac5b4e8b9fc82aea71f86d0ec9b2c9999f7a3cf0dc09f8c64968', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-05-19 22:27:07.130751 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bf2f51afe74cc62a0de96e2a1161da27a4ac5e05a93b497ca5f1aca4ecda4faa', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-05-19 22:27:07.130762 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9a88a1fd54c7eaf2d19cb3b3c372e361d0a4550f1e0a40849bac39a8c294d949', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-05-19 22:27:07.130773 | orchestrator | skipping: [testbed-node-5] => (item={'id': '543754a7cc8f0b506c7ecc00213ad3595a0195f5951d4e0bc8d0286bb7ae5f6c', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-05-19 22:27:07.130785 | orchestrator | skipping: [testbed-node-5] => (item={'id': '62e47277a288c1354bcd73a91387baf19005ae6ee325d552e9e0e5b9a41bda5e', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-05-19 22:27:07.130796 | orchestrator | skipping: [testbed-node-5] => (item={'id': '07dd4777af9e4512df294818f8c56d1f71ff626cebb5dd958c8cbb59728b2f96', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-05-19 22:27:07.130807 | orchestrator | skipping: [testbed-node-5] => (item={'id': '80e38e766ad00c76090624a85fa005c23a5fbd483fb2a3c674ec5e43745b0a82', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-05-19 22:27:07.130819 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f604dd44d6faa4ac7764590b7e609cbbfd638034793d834190e0d2df61381331', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-05-19 22:27:07.130830 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4b23e7ee41451a358508335ef6aa134f5df095807a24d040101fb000fba69e87', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-05-19 22:27:07.130843 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4f22b3fdf296f7ae0a91365c15a7efdbbbef26dc19621f2f2786cc9db0ba033d', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-05-19 22:27:07.130856 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9c6bb89e5a26a6460e25ff05805d020eda5ea285f868f080b7537c75f5faaa9e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-05-19 22:27:07.130875 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd2179ea59bc1af36ed22655736b1db4f5a95080a8190ac17ad4ef2f440a5ebbb', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2025-05-19 22:27:07.130894 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'deff6cdeee65415bb5b11496351284a30208f68666ddb9ba095135a764167d0c', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})
2025-05-19 22:27:07.130915 | orchestrator | ok: [testbed-node-5] => (item={'id': 'fdc8cbafa749ce7b9ced6b987baa1ebb08688aba8ca06adc3f31d677feb3d944', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-05-19 22:27:16.231275 | orchestrator | ok: [testbed-node-5] => (item={'id': '4dcbdbaec5dd149439411a4fed5831ef1388dae93dfbbe04ef7c2271b463ffa0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'})
2025-05-19 22:27:16.231399 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9c1e57a9d8d11d879c15449579047b3dff84413acbbfdef16285083ef182819e', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-05-19 22:27:16.231416 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1909e19e848c4f9b937f9d51ba7cc0d2d5eb5044b266831c309b5b3cc98f62b9', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-05-19 22:27:16.231430 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4a958af12da0fcd2bdba81361d672ebceaf299c86e37afc73fae9285bf803f2a', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-05-19 22:27:16.231442 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b6654eeb8e297e85522a3b87a20b6226d4c0879eeda5d6541ee8da1eb59acd11', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-05-19 22:27:16.231453 | orchestrator | skipping: [testbed-node-5] => (item={'id': '88b2d7e76b490886a275f75d4d6714b8da7306db599ff72c57498bde6ce3b122', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-05-19 22:27:16.231465 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e418eb4c0e8a9a027e8f076c899cfd6c91fe0cf4d783593792bc7d3b6b4ecdf2', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-05-19 22:27:16.231476 | orchestrator |
2025-05-19 22:27:16.231489 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-05-19 22:27:16.231502 | orchestrator | Monday 19 May 2025 22:27:07 +0000 (0:00:00.553) 0:00:05.637 ************
2025-05-19 22:27:16.231513 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.231525 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:16.231536 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:16.231547 | orchestrator |
2025-05-19 22:27:16.231558 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-05-19 22:27:16.231569 | orchestrator | Monday 19 May 2025 22:27:07 +0000 (0:00:00.313) 0:00:05.951 ************
2025-05-19 22:27:16.231580 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:16.231592 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:27:16.231635 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:27:16.231656 | orchestrator |
2025-05-19 22:27:16.231674 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-05-19 22:27:16.231689 | orchestrator | Monday 19 May 2025 22:27:07 +0000 (0:00:00.560) 0:00:06.512 ************
2025-05-19 22:27:16.231701 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.231712 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:16.231723 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:16.231733 | orchestrator |
2025-05-19 22:27:16.231744 | orchestrator | TASK [Prepare test data] *******************************************************
2025-05-19 22:27:16.231755 | orchestrator | Monday 19 May 2025 22:27:08 +0000 (0:00:00.314) 0:00:06.851 ************
2025-05-19 22:27:16.231792 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.231804 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:16.231817 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:16.231829 | orchestrator |
2025-05-19 22:27:16.231841 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-05-19 22:27:16.231853 | orchestrator | Monday 19 May 2025 22:27:08 +0000 (0:00:00.353) 0:00:07.166 ************
2025-05-19 22:27:16.231866 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-05-19 22:27:16.231879 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-05-19 22:27:16.231892 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:16.231918 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-05-19 22:27:16.231929 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-05-19 22:27:16.231940 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:27:16.231951 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-05-19 22:27:16.231962 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-05-19 22:27:16.231973 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:27:16.231984 | orchestrator |
2025-05-19 22:27:16.231995 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-05-19 22:27:16.232006 | orchestrator | Monday 19 May 2025 22:27:08 +0000 (0:00:00.353) 0:00:07.520 ************
2025-05-19 22:27:16.232017 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.232028 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:16.232039 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:16.232050 | orchestrator |
2025-05-19 22:27:16.232080 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-05-19 22:27:16.232091 | orchestrator | Monday 19 May 2025 22:27:09 +0000 (0:00:00.589) 0:00:08.110 ************
2025-05-19 22:27:16.232102 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:16.232114 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:27:16.232124 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:27:16.232135 | orchestrator |
2025-05-19 22:27:16.232146 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-05-19 22:27:16.232157 | orchestrator | Monday 19 May 2025 22:27:09 +0000 (0:00:00.324) 0:00:08.435 ************
2025-05-19 22:27:16.232168 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:16.232179 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:27:16.232190 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:27:16.232201 | orchestrator |
2025-05-19 22:27:16.232211 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-05-19 22:27:16.232222 | orchestrator | Monday 19 May 2025 22:27:10 +0000 (0:00:00.305) 0:00:08.740 ************
2025-05-19 22:27:16.232234 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.232245 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:16.232256 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:16.232267 | orchestrator |
2025-05-19 22:27:16.232278 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-05-19 22:27:16.232288 | orchestrator | Monday 19 May 2025 22:27:10 +0000 (0:00:00.320) 0:00:09.061 ************
2025-05-19 22:27:16.232299 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:16.232310 | orchestrator |
2025-05-19 22:27:16.232321 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-05-19 22:27:16.232332 | orchestrator | Monday 19 May 2025 22:27:11 +0000 (0:00:00.765) 0:00:09.826 ************
2025-05-19 22:27:16.232343 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:16.232353 | orchestrator |
2025-05-19 22:27:16.232364 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-05-19 22:27:16.232375 | orchestrator | Monday 19 May 2025 22:27:11 +0000 (0:00:00.258) 0:00:10.085 ************
2025-05-19 22:27:16.232393 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:16.232404 | orchestrator |
2025-05-19 22:27:16.232415 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:27:16.232426 | orchestrator | Monday 19 May 2025 22:27:11 +0000 (0:00:00.265) 0:00:10.350 ************
2025-05-19 22:27:16.232437 | orchestrator |
2025-05-19 22:27:16.232448 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:27:16.232459 | orchestrator | Monday 19 May 2025 22:27:11 +0000 (0:00:00.068) 0:00:10.419 ************
2025-05-19 22:27:16.232470 | orchestrator |
2025-05-19 22:27:16.232481 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:27:16.232492 | orchestrator | Monday 19 May 2025 22:27:11 +0000 (0:00:00.070) 0:00:10.489 ************
2025-05-19 22:27:16.232503 | orchestrator |
2025-05-19 22:27:16.232514 | orchestrator | TASK [Print report file information] *******************************************
2025-05-19 22:27:16.232525 | orchestrator | Monday 19 May 2025 22:27:12 +0000 (0:00:00.091) 0:00:10.580 ************
2025-05-19 22:27:16.232536 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:16.232547 | orchestrator |
2025-05-19 22:27:16.232557 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-05-19 22:27:16.232568 | orchestrator | Monday 19 May 2025 22:27:12 +0000 (0:00:00.254) 0:00:10.835 ************
2025-05-19 22:27:16.232579 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:16.232590 | orchestrator |
2025-05-19 22:27:16.232601 | orchestrator | TASK [Prepare test data] *******************************************************
2025-05-19 22:27:16.232636 | orchestrator | Monday 19 May 2025 22:27:12 +0000 (0:00:00.258) 0:00:11.093 ************
2025-05-19 22:27:16.232647 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.232658 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:16.232669 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:16.232680 | orchestrator |
2025-05-19 22:27:16.232690 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-05-19 22:27:16.232701 | orchestrator | Monday 19 May 2025 22:27:12 +0000 (0:00:00.286) 0:00:11.380 ************
2025-05-19 22:27:16.232712 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.232723 | orchestrator |
2025-05-19 22:27:16.232734 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-05-19 22:27:16.232745 | orchestrator | Monday 19 May 2025 22:27:13 +0000 (0:00:00.754) 0:00:12.135 ************
2025-05-19 22:27:16.232756 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-19 22:27:16.232767 | orchestrator |
2025-05-19 22:27:16.232778 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-05-19 22:27:16.232788 | orchestrator | Monday 19 May 2025 22:27:15 +0000 (0:00:01.657) 0:00:13.792 ************
2025-05-19 22:27:16.232799 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.232810 | orchestrator |
2025-05-19 22:27:16.232821 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-05-19 22:27:16.232832 | orchestrator | Monday 19 May 2025 22:27:15 +0000 (0:00:00.138) 0:00:13.931 ************
2025-05-19 22:27:16.232843 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.232854 | orchestrator |
2025-05-19 22:27:16.232865 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-05-19 22:27:16.232876 | orchestrator | Monday 19 May 2025 22:27:15 +0000 (0:00:00.247) 0:00:14.178 ************
2025-05-19 22:27:16.232887 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:16.232898 | orchestrator |
2025-05-19 22:27:16.232908 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-05-19 22:27:16.232920 | orchestrator | Monday 19 May 2025 22:27:15 +0000 (0:00:00.126) 0:00:14.304 ************
2025-05-19 22:27:16.232930 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.232941 | orchestrator |
2025-05-19 22:27:16.232952 | orchestrator | TASK [Prepare test data] *******************************************************
2025-05-19 22:27:16.232963 | orchestrator | Monday 19 May 2025 22:27:15 +0000 (0:00:00.129) 0:00:14.434 ************
2025-05-19 22:27:16.232981 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:16.232992 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:16.233003 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:16.233013 | orchestrator |
2025-05-19 22:27:16.233025 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-05-19 22:27:16.233042 | orchestrator | Monday 19 May 2025 22:27:16 +0000 (0:00:00.314) 0:00:14.749 ************
2025-05-19 22:27:28.841796 | orchestrator | changed: [testbed-node-3]
2025-05-19 22:27:28.841945 | orchestrator | changed: [testbed-node-4]
2025-05-19 22:27:28.841988 | orchestrator | changed: [testbed-node-5]
2025-05-19 22:27:28.842010 | orchestrator |
2025-05-19 22:27:28.842072 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-05-19 22:27:28.842084 | orchestrator | Monday 19 May 2025 22:27:18 +0000 (0:00:02.606) 0:00:17.355 ************
2025-05-19 22:27:28.842094 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:28.842105 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:28.842115 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:28.842125 | orchestrator |
2025-05-19 22:27:28.842135 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-05-19 22:27:28.842145 | orchestrator | Monday 19 May 2025 22:27:19 +0000 (0:00:00.347) 0:00:17.702 ************
2025-05-19 22:27:28.842154 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:28.842165 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:28.842175 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:28.842184 | orchestrator |
2025-05-19 22:27:28.842194 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-05-19 22:27:28.842204 | orchestrator | Monday 19 May 2025 22:27:19 +0000 (0:00:00.396) 0:00:18.098 ************
2025-05-19 22:27:28.842214 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:28.842223 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:27:28.842233 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:27:28.842242 | orchestrator |
2025-05-19 22:27:28.842252 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-05-19 22:27:28.842264 | orchestrator | Monday 19 May 2025 22:27:19 +0000 (0:00:00.296) 0:00:18.395 ************
2025-05-19 22:27:28.842274 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:28.842285 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:28.842296 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:28.842306 | orchestrator |
2025-05-19 22:27:28.842317 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-05-19 22:27:28.842328 | orchestrator | Monday 19 May 2025 22:27:20 +0000 (0:00:00.579) 0:00:18.974 ************
2025-05-19 22:27:28.842338 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:28.842349 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:27:28.842360 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:27:28.842371 | orchestrator |
2025-05-19 22:27:28.842383 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-05-19 22:27:28.842442 | orchestrator | Monday 19 May 2025 22:27:20 +0000 (0:00:00.308) 0:00:19.282 ************
2025-05-19 22:27:28.842455 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:28.842466 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:27:28.842477 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:27:28.842488 | orchestrator |
2025-05-19 22:27:28.842498 | orchestrator | TASK [Prepare test data] *******************************************************
2025-05-19 22:27:28.842509 | orchestrator | Monday 19 May 2025 22:27:21 +0000 (0:00:00.298) 0:00:19.581 ************
2025-05-19 22:27:28.842520 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:28.842532 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:28.842543 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:28.842554 | orchestrator |
2025-05-19 22:27:28.842565 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-05-19 22:27:28.842575 | orchestrator | Monday 19 May 2025 22:27:21 +0000 (0:00:00.428) 0:00:20.009 ************
2025-05-19 22:27:28.842584 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:28.842594 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:28.842651 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:28.842662 | orchestrator |
2025-05-19 22:27:28.842672 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-05-19 22:27:28.842682 | orchestrator | Monday 19 May 2025 22:27:22 +0000 (0:00:00.706) 0:00:20.716 ************
2025-05-19 22:27:28.842691 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:28.842701 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:28.842710 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:28.842720 | orchestrator |
2025-05-19 22:27:28.842729 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-05-19 22:27:28.842739 | orchestrator | Monday 19 May 2025 22:27:22 +0000 (0:00:00.296) 0:00:21.012 ************
2025-05-19 22:27:28.842749 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:28.842759 | orchestrator | skipping: [testbed-node-4]
2025-05-19 22:27:28.842768 | orchestrator | skipping: [testbed-node-5]
2025-05-19 22:27:28.842778 | orchestrator |
2025-05-19 22:27:28.842788 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-05-19 22:27:28.842798 | orchestrator | Monday 19 May 2025 22:27:22 +0000 (0:00:00.310) 0:00:21.323 ************
2025-05-19 22:27:28.842807 | orchestrator | ok: [testbed-node-3]
2025-05-19 22:27:28.842817 | orchestrator | ok: [testbed-node-4]
2025-05-19 22:27:28.842826 | orchestrator | ok: [testbed-node-5]
2025-05-19 22:27:28.842836 | orchestrator |
2025-05-19 22:27:28.842846 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-05-19 22:27:28.842855 | orchestrator | Monday 19 May 2025 22:27:23 +0000 (0:00:00.548) 0:00:21.871 ************
2025-05-19 22:27:28.842865 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-19 22:27:28.842875 | orchestrator |
2025-05-19 22:27:28.842890 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-05-19 22:27:28.842902 | orchestrator | Monday 19 May 2025 22:27:23 +0000 (0:00:00.280) 0:00:22.152 ************
2025-05-19 22:27:28.842918 | orchestrator | skipping: [testbed-node-3]
2025-05-19 22:27:28.842934 | orchestrator |
2025-05-19 22:27:28.842950 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-05-19 22:27:28.842966 | orchestrator | Monday 19 May 2025 22:27:23 +0000 (0:00:00.317) 0:00:22.469 ************
2025-05-19 22:27:28.842982 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-19 22:27:28.842999 | orchestrator |
2025-05-19 22:27:28.843014 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-05-19 22:27:28.843031 | orchestrator | Monday 19 May 2025 22:27:25 +0000 (0:00:01.747) 0:00:24.216 ************
2025-05-19 22:27:28.843048 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-19 22:27:28.843064 | orchestrator |
2025-05-19 22:27:28.843077 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-05-19 22:27:28.843087 | orchestrator | Monday 19 May 2025 22:27:25 +0000 (0:00:00.253) 0:00:24.470 ************
2025-05-19 22:27:28.843116 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-19 22:27:28.843126 | orchestrator |
2025-05-19 22:27:28.843135 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:27:28.843145 | orchestrator | Monday 19 May 2025 22:27:26 +0000 (0:00:00.307) 0:00:24.778 ************
2025-05-19 22:27:28.843154 | orchestrator |
2025-05-19 22:27:28.843163 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:27:28.843173 | orchestrator | Monday 19 May 2025 22:27:26 +0000 (0:00:00.068) 0:00:24.846 ************
2025-05-19 22:27:28.843182 | orchestrator |
2025-05-19 22:27:28.843192 | orchestrator | TASK [Flush handlers] **********************************************************
2025-05-19 22:27:28.843201 | orchestrator | Monday 19 May 2025 22:27:26 +0000 (0:00:00.068) 0:00:24.915 ************
2025-05-19 22:27:28.843211 | orchestrator |
2025-05-19 22:27:28.843220 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-05-19 22:27:28.843230 | orchestrator | Monday 19 May 2025 22:27:26 +0000 (0:00:00.072) 0:00:24.987 ************
2025-05-19 22:27:28.843251 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-19 22:27:28.843260 | orchestrator |
2025-05-19 22:27:28.843270 | orchestrator | TASK [Print report file information] *******************************************
2025-05-19 22:27:28.843279 | orchestrator | Monday 19 May 2025 22:27:27 +0000 (0:00:01.340) 0:00:26.327 ************
2025-05-19 22:27:28.843289 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-05-19 22:27:28.843298 | orchestrator |  "msg": [
2025-05-19 22:27:28.843308 | orchestrator |  "Validator run completed.",
2025-05-19 22:27:28.843318 | orchestrator |  "You can find the report file here:",
2025-05-19 22:27:28.843327 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-05-19T22:27:02+00:00-report.json",
2025-05-19 22:27:28.843338 | orchestrator |  "on the following host:",
2025-05-19 22:27:28.843347 | orchestrator |  "testbed-manager"
2025-05-19 22:27:28.843357 | orchestrator |  ]
2025-05-19 22:27:28.843367 | orchestrator | }
2025-05-19 22:27:28.843377 | orchestrator |
2025-05-19 22:27:28.843386 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 22:27:28.843397 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-05-19 22:27:28.843408 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-19 22:27:28.843418 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-19 22:27:28.843427 | orchestrator |
2025-05-19 22:27:28.843436 | orchestrator |
2025-05-19 22:27:28.843446 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:27:28.843455 | orchestrator | Monday 19 May 2025 22:27:28 +0000 (0:00:00.624) 0:00:26.953 ************
2025-05-19 22:27:28.843465 | orchestrator | ===============================================================================
2025-05-19 22:27:28.843474 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.61s
2025-05-19 22:27:28.843484 | orchestrator | Aggregate test results step one ----------------------------------------- 1.75s
2025-05-19 22:27:28.843493 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.66s
2025-05-19 22:27:28.843503 | orchestrator | Write report file ------------------------------------------------------- 1.34s
2025-05-19 22:27:28.843512 | orchestrator | Create report output directory ------------------------------------------ 1.06s
2025-05-19 22:27:28.843522 | orchestrator | Aggregate test results step one ----------------------------------------- 0.77s
2025-05-19 22:27:28.843531 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.75s
2025-05-19 22:27:28.843541 | orchestrator | Get timestamp for report file ------------------------------------------- 0.71s
2025-05-19 22:27:28.843550 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.71s
2025-05-19 22:27:28.843560 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.63s
2025-05-19 22:27:28.843569 | orchestrator | Print report file information ------------------------------------------- 0.63s
2025-05-19 22:27:28.843579 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.59s
2025-05-19 22:27:28.843588 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.58s
2025-05-19 22:27:28.843598 | orchestrator | Prepare test data ------------------------------------------------------- 0.57s
2025-05-19 22:27:28.843690 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.56s
2025-05-19 22:27:28.843700 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.55s
2025-05-19 22:27:28.843710 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.55s
2025-05-19 22:27:28.843719 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.45s
2025-05-19 22:27:28.843736 | orchestrator | Prepare test data ------------------------------------------------------- 0.43s
2025-05-19 22:27:28.843746 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.40s
2025-05-19 22:27:29.154913 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-05-19 22:27:29.161936 | orchestrator | + set -e
2025-05-19 22:27:29.162104 | orchestrator | + source /opt/manager-vars.sh
2025-05-19 22:27:29.162129 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-19 22:27:29.162149 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-19 22:27:29.162168 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-19 22:27:29.162187 | orchestrator | ++ CEPH_VERSION=reef
2025-05-19 22:27:29.162206 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-19 22:27:29.162226 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-19 22:27:29.162245 | orchestrator | ++ export MANAGER_VERSION=latest
2025-05-19 22:27:29.162263 | orchestrator | ++ MANAGER_VERSION=latest
2025-05-19 22:27:29.162282 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-19 22:27:29.162300 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-19 22:27:29.162318 | orchestrator | ++ export ARA=false
2025-05-19 22:27:29.162337 | orchestrator | ++ ARA=false
2025-05-19 22:27:29.162355 | orchestrator | ++ export TEMPEST=false
2025-05-19 22:27:29.162373 | orchestrator | ++ TEMPEST=false
2025-05-19 22:27:29.162391 | orchestrator | ++ export IS_ZUUL=true
2025-05-19 22:27:29.162409 | orchestrator | ++ IS_ZUUL=true
2025-05-19 22:27:29.162428 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197
2025-05-19 22:27:29.162447 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.197
2025-05-19 22:27:29.162465 | orchestrator | ++ export EXTERNAL_API=false
2025-05-19 22:27:29.162483 | orchestrator | ++ EXTERNAL_API=false
2025-05-19 22:27:29.162503 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-19 22:27:29.162522 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-19 22:27:29.162542 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-19 22:27:29.162562 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-19 22:27:29.162581 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-19 22:27:29.162626 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-19 22:27:29.162647 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-19 22:27:29.162665 | orchestrator | + source /etc/os-release
2025-05-19 22:27:29.162684 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-05-19 22:27:29.162704 | orchestrator | ++ NAME=Ubuntu
2025-05-19 22:27:29.162722 | orchestrator | ++ VERSION_ID=24.04
2025-05-19 22:27:29.162742 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-05-19 22:27:29.162760 | orchestrator | ++ VERSION_CODENAME=noble
2025-05-19 22:27:29.162780 | orchestrator | ++ ID=ubuntu
2025-05-19 22:27:29.162799 | orchestrator | ++ ID_LIKE=debian
2025-05-19 22:27:29.162819 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-05-19 22:27:29.162838 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-05-19 22:27:29.162858 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-05-19 22:27:29.162876 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-05-19 22:27:29.162896 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-05-19 22:27:29.162914 | orchestrator | ++ LOGO=ubuntu-logo
2025-05-19 22:27:29.162932 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-05-19 22:27:29.162951 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-05-19 22:27:29.162971 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-05-19 22:27:29.183956 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-05-19 22:27:50.458797 | orchestrator |
2025-05-19 22:27:50.458923 | orchestrator | # Status of Elasticsearch
2025-05-19 22:27:50.458949 | orchestrator |
2025-05-19 22:27:50.458969 | orchestrator | + pushd /opt/configuration/contrib
2025-05-19 22:27:50.458982 | orchestrator | + echo
2025-05-19 22:27:50.458994 | orchestrator | + echo '# Status of Elasticsearch'
2025-05-19 22:27:50.459005 | orchestrator | + echo
2025-05-19 22:27:50.459016 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-05-19 22:27:50.637135 | orchestrator | OK - elasticsearch (kolla_logging) is running.
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-05-19 22:27:50.637241 | orchestrator | 2025-05-19 22:27:50.637258 | orchestrator | # Status of MariaDB 2025-05-19 22:27:50.637295 | orchestrator | 2025-05-19 22:27:50.637307 | orchestrator | + echo 2025-05-19 22:27:50.637318 | orchestrator | + echo '# Status of MariaDB' 2025-05-19 22:27:50.637329 | orchestrator | + echo 2025-05-19 22:27:50.637339 | orchestrator | + MARIADB_USER=root_shard_0 2025-05-19 22:27:50.637351 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-05-19 22:27:50.698407 | orchestrator | Reading package lists... 2025-05-19 22:27:51.071009 | orchestrator | Building dependency tree... 2025-05-19 22:27:51.071627 | orchestrator | Reading state information... 2025-05-19 22:27:51.494373 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-05-19 22:27:51.494487 | orchestrator | bc set to manually installed. 2025-05-19 22:27:51.494504 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
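The package pre-flight above probes with `dpkg -s` and only then runs `apt-get install`, after selecting a package set from `/etc/os-release`. A minimal sketch of that distro dispatch, assuming the Ubuntu/Debian branch shown in the log (`select_packages` is a hypothetical helper name, the package list is copied from the trace):

```shell
# Hypothetical helper: map an os-release ID to the monitoring package
# set installed by the check script. Only the ubuntu/debian branch is
# evidenced by the log; other IDs fall through empty.
select_packages() {
    # $1: value of ID from /etc/os-release (e.g. "ubuntu")
    case "$1" in
        ubuntu|debian)
            echo "libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client"
            ;;
        *)
            echo ""
            ;;
    esac
}
```

In the script itself the ID comes from `source /etc/os-release`, so a caller would pass `"$ID"` after sourcing that file.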
2025-05-19 22:27:52.161069 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-05-19 22:27:52.161177 | orchestrator | 2025-05-19 22:27:52.161193 | orchestrator | # Status of Prometheus 2025-05-19 22:27:52.161206 | orchestrator | 2025-05-19 22:27:52.161217 | orchestrator | + echo 2025-05-19 22:27:52.161229 | orchestrator | + echo '# Status of Prometheus' 2025-05-19 22:27:52.161240 | orchestrator | + echo 2025-05-19 22:27:52.161252 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-05-19 22:27:52.241925 | orchestrator | Unauthorized 2025-05-19 22:27:52.245431 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-05-19 22:27:52.296163 | orchestrator | Unauthorized 2025-05-19 22:27:52.299892 | orchestrator | 2025-05-19 22:27:52.299949 | orchestrator | # Status of RabbitMQ 2025-05-19 22:27:52.299971 | orchestrator | 2025-05-19 22:27:52.299990 | orchestrator | + echo 2025-05-19 22:27:52.300001 | orchestrator | + echo '# Status of RabbitMQ' 2025-05-19 22:27:52.300013 | orchestrator | + echo 2025-05-19 22:27:52.300025 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-05-19 22:27:52.831503 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-05-19 22:27:52.847510 | orchestrator | 2025-05-19 22:27:52.847667 | orchestrator | # Status of Redis 2025-05-19 22:27:52.847696 | orchestrator | 2025-05-19 22:27:52.847717 | orchestrator | + echo 2025-05-19 22:27:52.847735 | orchestrator | + echo '# Status of Redis' 2025-05-19 22:27:52.847755 | orchestrator | + echo 2025-05-19 22:27:52.847768 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-05-19 22:27:52.853187 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002329s;;;0.000000;10.000000 2025-05-19 22:27:52.853523 | orchestrator | + popd 2025-05-19 22:27:52.853568 | orchestrator | 2025-05-19 22:27:52.853615 | orchestrator | # Create backup of MariaDB database 2025-05-19 22:27:52.853634 | orchestrator | 2025-05-19 22:27:52.853652 | orchestrator | + echo 2025-05-19 22:27:52.853671 | orchestrator | + echo '# Create backup of MariaDB database' 2025-05-19 22:27:52.853688 | orchestrator | + echo 2025-05-19 22:27:52.853828 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-05-19 22:27:54.752574 | orchestrator | 2025-05-19 22:27:54 | INFO  | Task 722749de-e21a-49c2-a57b-460d1266ef75 (mariadb_backup) was prepared for execution. 2025-05-19 22:27:54.752696 | orchestrator | 2025-05-19 22:27:54 | INFO  | It takes a moment until task 722749de-e21a-49c2-a57b-460d1266ef75 (mariadb_backup) has been started and output is visible here. 2025-05-19 22:27:58.995032 | orchestrator | 2025-05-19 22:27:58.995311 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:27:58.998460 | orchestrator | 2025-05-19 22:27:58.998562 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:27:59.000081 | orchestrator | Monday 19 May 2025 22:27:58 +0000 (0:00:00.206) 0:00:00.206 ************ 2025-05-19 22:27:59.210427 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:27:59.331131 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:27:59.332440 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:27:59.339016 | orchestrator | 2025-05-19 22:27:59.339175 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:27:59.340857 | orchestrator | Monday 19 May 2025 22:27:59 +0000 (0:00:00.339) 0:00:00.545 ************ 2025-05-19 22:27:59.975061 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-05-19 22:27:59.975886 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-19 22:27:59.979045 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-19 22:27:59.979104 | orchestrator | 2025-05-19 22:27:59.979120 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-19 22:27:59.979134 | orchestrator | 2025-05-19 22:27:59.979245 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-19 22:27:59.980447 | orchestrator | Monday 19 May 2025 22:27:59 +0000 (0:00:00.643) 0:00:01.189 ************ 2025-05-19 22:28:00.449249 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 22:28:00.450470 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-19 22:28:00.451947 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-19 22:28:00.452211 | orchestrator | 2025-05-19 22:28:00.454790 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 22:28:00.458570 | orchestrator | Monday 19 May 2025 22:28:00 +0000 (0:00:00.472) 0:00:01.661 ************ 2025-05-19 22:28:01.123976 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:28:01.125268 | orchestrator | 2025-05-19 22:28:01.130723 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-19 22:28:01.132231 | orchestrator | Monday 19 May 2025 22:28:01 +0000 (0:00:00.670) 0:00:02.332 ************ 2025-05-19 22:28:04.790973 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:28:04.791979 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:28:04.793417 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:28:04.794816 | orchestrator | 2025-05-19 22:28:04.795303 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-05-19 22:28:04.796066 | orchestrator | Monday 19 May 2025 22:28:04 +0000 (0:00:03.666) 0:00:05.998 ************ 2025-05-19 22:28:22.317273 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-19 22:28:22.317390 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-19 22:28:22.322157 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-19 22:28:22.322740 | orchestrator | mariadb_bootstrap_restart 2025-05-19 22:28:22.394560 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:28:22.394862 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:28:22.395964 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:28:22.397084 | orchestrator | 2025-05-19 22:28:22.399953 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-19 22:28:22.400769 | orchestrator | skipping: no hosts matched 2025-05-19 22:28:22.401131 | orchestrator | 2025-05-19 22:28:22.401961 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-19 22:28:22.402918 | orchestrator | skipping: no hosts matched 2025-05-19 22:28:22.404510 | orchestrator | 2025-05-19 22:28:22.410541 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-19 22:28:22.410659 | orchestrator | skipping: no hosts matched 2025-05-19 22:28:22.410676 | orchestrator | 2025-05-19 22:28:22.410689 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-19 22:28:22.410700 | orchestrator | 2025-05-19 22:28:22.411701 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-19 22:28:22.413415 | orchestrator | Monday 19 May 2025 22:28:22 +0000 (0:00:17.611) 0:00:23.610 ************ 2025-05-19 22:28:22.613114 | orchestrator | 
skipping: [testbed-node-0] 2025-05-19 22:28:22.748363 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:28:22.749185 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:28:22.753692 | orchestrator | 2025-05-19 22:28:22.754430 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-19 22:28:22.755163 | orchestrator | Monday 19 May 2025 22:28:22 +0000 (0:00:00.351) 0:00:23.962 ************ 2025-05-19 22:28:23.209959 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:28:23.252875 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:28:23.254986 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:28:23.256825 | orchestrator | 2025-05-19 22:28:23.258386 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:28:23.259153 | orchestrator | 2025-05-19 22:28:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 22:28:23.259596 | orchestrator | 2025-05-19 22:28:23 | INFO  | Please wait and do not abort execution. 
2025-05-19 22:28:23.261536 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 22:28:23.262219 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 22:28:23.262962 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 22:28:23.263601 | orchestrator | 2025-05-19 22:28:23.264488 | orchestrator | 2025-05-19 22:28:23.266468 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:28:23.267461 | orchestrator | Monday 19 May 2025 22:28:23 +0000 (0:00:00.506) 0:00:24.468 ************ 2025-05-19 22:28:23.268296 | orchestrator | =============================================================================== 2025-05-19 22:28:23.269287 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.61s 2025-05-19 22:28:23.270144 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.67s 2025-05-19 22:28:23.270845 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.67s 2025-05-19 22:28:23.271526 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s 2025-05-19 22:28:23.272059 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.51s 2025-05-19 22:28:23.272723 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.47s 2025-05-19 22:28:23.274978 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.35s 2025-05-19 22:28:23.275016 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-05-19 22:28:24.045952 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=incremental 2025-05-19 22:28:26.056289 | orchestrator | 
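The check script drives both backup flavors through the same playbook entry point, varying only `mariadb_backup_type` (`full` above, `incremental` next). A sketch of that pattern; `build_backup_cmd` is a hypothetical wrapper, while the `osism apply` invocation is taken verbatim from the trace:

```shell
# Hypothetical wrapper around the backup invocation seen in the log.
# It only assembles the command string, so the sequencing of a full
# backup followed by an incremental one stays in one place.
build_backup_cmd() {
    # $1: backup type, "full" or "incremental"
    echo "osism apply mariadb_backup -e mariadb_backup_type=$1"
}

# The script's sequence would then be roughly:
#   $(build_backup_cmd full)
#   $(build_backup_cmd incremental)
```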
2025-05-19 22:28:26 | INFO  | Task 2f91e1e5-8663-4bbc-9320-ffedd9b3fef4 (mariadb_backup) was prepared for execution. 2025-05-19 22:28:26.056436 | orchestrator | 2025-05-19 22:28:26 | INFO  | It takes a moment until task 2f91e1e5-8663-4bbc-9320-ffedd9b3fef4 (mariadb_backup) has been started and output is visible here. 2025-05-19 22:28:30.367296 | orchestrator | 2025-05-19 22:28:30.370407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-19 22:28:30.372357 | orchestrator | 2025-05-19 22:28:30.373558 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-19 22:28:30.374095 | orchestrator | Monday 19 May 2025 22:28:30 +0000 (0:00:00.210) 0:00:00.210 ************ 2025-05-19 22:28:30.583866 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:28:30.710791 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:28:30.711208 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:28:30.712066 | orchestrator | 2025-05-19 22:28:30.715834 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-19 22:28:30.715876 | orchestrator | Monday 19 May 2025 22:28:30 +0000 (0:00:00.347) 0:00:00.558 ************ 2025-05-19 22:28:31.336210 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-19 22:28:31.337185 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-19 22:28:31.338436 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-19 22:28:31.340537 | orchestrator | 2025-05-19 22:28:31.341244 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-19 22:28:31.341811 | orchestrator | 2025-05-19 22:28:31.342993 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-19 22:28:31.343254 | orchestrator | Monday 19 May 2025 22:28:31 +0000 (0:00:00.625) 0:00:01.183 ************ 
2025-05-19 22:28:31.764395 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-19 22:28:31.765314 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-19 22:28:31.766424 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-19 22:28:31.767297 | orchestrator | 2025-05-19 22:28:31.770106 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-19 22:28:31.770704 | orchestrator | Monday 19 May 2025 22:28:31 +0000 (0:00:00.426) 0:00:01.610 ************ 2025-05-19 22:28:32.329266 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-19 22:28:32.335243 | orchestrator | 2025-05-19 22:28:32.335453 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-05-19 22:28:32.336738 | orchestrator | Monday 19 May 2025 22:28:32 +0000 (0:00:00.564) 0:00:02.174 ************ 2025-05-19 22:28:35.790746 | orchestrator | ok: [testbed-node-0] 2025-05-19 22:28:35.795355 | orchestrator | ok: [testbed-node-2] 2025-05-19 22:28:35.795467 | orchestrator | ok: [testbed-node-1] 2025-05-19 22:28:35.795485 | orchestrator | 2025-05-19 22:28:35.796268 | orchestrator | TASK [mariadb : Taking incremental database backup via Mariabackup] ************ 2025-05-19 22:28:35.797076 | orchestrator | Monday 19 May 2025 22:28:35 +0000 (0:00:03.459) 0:00:05.634 ************ 2025-05-19 22:28:53.104474 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-19 22:28:53.104770 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-05-19 22:28:53.105714 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-19 22:28:53.107027 | orchestrator | mariadb_bootstrap_restart 2025-05-19 22:28:53.197961 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:28:53.198372 | orchestrator | 
skipping: [testbed-node-2] 2025-05-19 22:28:53.199526 | orchestrator | changed: [testbed-node-0] 2025-05-19 22:28:53.200633 | orchestrator | 2025-05-19 22:28:53.206905 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-19 22:28:53.206955 | orchestrator | skipping: no hosts matched 2025-05-19 22:28:53.206967 | orchestrator | 2025-05-19 22:28:53.206979 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-19 22:28:53.207778 | orchestrator | skipping: no hosts matched 2025-05-19 22:28:53.207982 | orchestrator | 2025-05-19 22:28:53.208725 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-19 22:28:53.209182 | orchestrator | skipping: no hosts matched 2025-05-19 22:28:53.209475 | orchestrator | 2025-05-19 22:28:53.212886 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-19 22:28:53.213610 | orchestrator | 2025-05-19 22:28:53.213928 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-19 22:28:53.214387 | orchestrator | Monday 19 May 2025 22:28:53 +0000 (0:00:17.411) 0:00:23.046 ************ 2025-05-19 22:28:53.400296 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:28:53.538340 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:28:53.538468 | orchestrator | skipping: [testbed-node-2] 2025-05-19 22:28:53.542206 | orchestrator | 2025-05-19 22:28:53.542248 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-19 22:28:53.542264 | orchestrator | Monday 19 May 2025 22:28:53 +0000 (0:00:00.336) 0:00:23.382 ************ 2025-05-19 22:28:53.988068 | orchestrator | skipping: [testbed-node-0] 2025-05-19 22:28:54.039325 | orchestrator | skipping: [testbed-node-1] 2025-05-19 22:28:54.039710 | orchestrator | skipping: [testbed-node-2] 2025-05-19 
22:28:54.040543 | orchestrator | 2025-05-19 22:28:54.041695 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-19 22:28:54.041755 | orchestrator | 2025-05-19 22:28:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-19 22:28:54.041939 | orchestrator | 2025-05-19 22:28:54 | INFO  | Please wait and do not abort execution. 2025-05-19 22:28:54.042533 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-19 22:28:54.043326 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 22:28:54.043736 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-19 22:28:54.044361 | orchestrator | 2025-05-19 22:28:54.044751 | orchestrator | 2025-05-19 22:28:54.045701 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-19 22:28:54.045885 | orchestrator | Monday 19 May 2025 22:28:54 +0000 (0:00:00.501) 0:00:23.884 ************ 2025-05-19 22:28:54.046381 | orchestrator | =============================================================================== 2025-05-19 22:28:54.046817 | orchestrator | mariadb : Taking incremental database backup via Mariabackup ----------- 17.41s 2025-05-19 22:28:54.047515 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.46s 2025-05-19 22:28:54.047898 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s 2025-05-19 22:28:54.048636 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2025-05-19 22:28:54.049004 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.50s 2025-05-19 22:28:54.049539 | orchestrator | mariadb : Group MariaDB hosts based on shards 
--------------------------- 0.43s 2025-05-19 22:28:54.050833 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2025-05-19 22:28:54.051177 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.34s 2025-05-19 22:28:54.728040 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-05-19 22:28:54.735345 | orchestrator | + set -e 2025-05-19 22:28:54.735471 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-19 22:28:54.735498 | orchestrator | ++ export INTERACTIVE=false 2025-05-19 22:28:54.735517 | orchestrator | ++ INTERACTIVE=false 2025-05-19 22:28:54.735529 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-19 22:28:54.735540 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-19 22:28:54.735601 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-05-19 22:28:54.736621 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-05-19 22:28:54.744605 | orchestrator | 2025-05-19 22:28:54.744707 | orchestrator | # OpenStack endpoints 2025-05-19 22:28:54.744724 | orchestrator | 2025-05-19 22:28:54.744735 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-19 22:28:54.744747 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-19 22:28:54.744758 | orchestrator | + export OS_CLOUD=admin 2025-05-19 22:28:54.744769 | orchestrator | + OS_CLOUD=admin 2025-05-19 22:28:54.744780 | orchestrator | + echo 2025-05-19 22:28:54.744791 | orchestrator | + echo '# OpenStack endpoints' 2025-05-19 22:28:54.744802 | orchestrator | + echo 2025-05-19 22:28:54.744813 | orchestrator | + openstack endpoint list 2025-05-19 22:28:58.157154 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-05-19 22:28:58.157274 | orchestrator | | 
ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-05-19 22:28:58.157283 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-05-19 22:28:58.157290 | orchestrator | | 025f3a069b5a4909a1928bac5e6c1c25 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-05-19 22:28:58.157296 | orchestrator | | 03a92bc9dfd44f24b93cfc0e469fe376 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-05-19 22:28:58.157318 | orchestrator | | 205dfebbd6e1424a93c5721ecfc7fd64 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-05-19 22:28:58.157324 | orchestrator | | 2133b995c0dd48a68906e843a05ca76b | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-05-19 22:28:58.157340 | orchestrator | | 238c40fd9e224d7489d1dd0f074d30bf | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-05-19 22:28:58.157353 | orchestrator | | 24f0f70c4fbb476488f87ee81b9273ea | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-05-19 22:28:58.157358 | orchestrator | | 281d57024569445cadbdbb5fcbb23358 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-05-19 22:28:58.157364 | orchestrator | | 3c485f04941747d0a2bd2f650f67c520 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-05-19 22:28:58.157369 | orchestrator | | 3f08cdd652c54b8daaf09ca807fdfb6a | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-05-19 22:28:58.157374 | orchestrator | | 448d40b4eb9040f3ba0ca09786626e46 | RegionOne | designate | dns | True | 
public | https://api.testbed.osism.xyz:9001 | 2025-05-19 22:28:58.157379 | orchestrator | | 4d72a1e11a5441a889c08207047583a5 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-05-19 22:28:58.157385 | orchestrator | | 5a6a5a648cfd461599a2b4786e61fbf5 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-05-19 22:28:58.157390 | orchestrator | | 618a229462d042fda6d86d4865a5c7ef | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-05-19 22:28:58.157395 | orchestrator | | 6db24340a42b402c8b72e4d170f44c18 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-05-19 22:28:58.157401 | orchestrator | | 7276f1413f8b4e4ebaed6ead63d83a99 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-05-19 22:28:58.157406 | orchestrator | | 7ac3aa8b727f46a39c7da155297e15c6 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-05-19 22:28:58.157411 | orchestrator | | 7e32a8e1cd854e858e16fea47f5f9946 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-05-19 22:28:58.157416 | orchestrator | | 8d9d46d516d4499d9adb593fb5ce79fe | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-05-19 22:28:58.157421 | orchestrator | | 92702d032ba44e74b6eb58fd7e2200fb | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-05-19 22:28:58.157426 | orchestrator | | 994fc48f72a54770a5135e98a20e8ce3 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-05-19 22:28:58.157445 | orchestrator | | ae898b499c814af690dbf651e183391d | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-05-19 
22:28:58.157451 | orchestrator | | c6682f2e4eb04616b17ffd46dc6e7c6d | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2025-05-19 22:28:58.157460 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-05-19 22:28:58.541744 | orchestrator |
2025-05-19 22:28:58.541857 | orchestrator | # Cinder
2025-05-19 22:28:58.541873 | orchestrator |
2025-05-19 22:28:58.541885 | orchestrator | + echo
2025-05-19 22:28:58.541897 | orchestrator | + echo '# Cinder'
2025-05-19 22:28:58.541909 | orchestrator | + echo
2025-05-19 22:28:58.541920 | orchestrator | + openstack volume service list
2025-05-19 22:29:01.511937 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-05-19 22:29:01.512057 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-05-19 22:29:01.512074 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-05-19 22:29:01.512086 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-05-19T22:28:59.000000 |
2025-05-19 22:29:01.512098 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-05-19T22:28:51.000000 |
2025-05-19 22:29:01.512110 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-05-19T22:28:52.000000 |
2025-05-19 22:29:01.512140 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-05-19T22:28:58.000000 |
2025-05-19 22:29:01.512153 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-05-19T22:28:58.000000 |
2025-05-19 22:29:01.512165 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-05-19T22:28:59.000000 |
2025-05-19 22:29:01.512176 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-05-19T22:28:59.000000 |
2025-05-19 22:29:01.512188 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-05-19T22:28:59.000000 |
2025-05-19 22:29:01.512199 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-05-19T22:29:00.000000 |
2025-05-19 22:29:01.512210 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-05-19 22:29:01.899810 | orchestrator |
2025-05-19 22:29:01.899920 | orchestrator | # Neutron
2025-05-19 22:29:01.899937 | orchestrator |
2025-05-19 22:29:01.899950 | orchestrator | + echo
2025-05-19 22:29:01.899961 | orchestrator | + echo '# Neutron'
2025-05-19 22:29:01.899974 | orchestrator | + echo
2025-05-19 22:29:01.899985 | orchestrator | + openstack network agent list
2025-05-19 22:29:04.844191 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-05-19 22:29:04.844331 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2025-05-19 22:29:04.844356 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-05-19 22:29:04.844378 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2025-05-19 22:29:04.844397 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2025-05-19 22:29:04.844416 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2025-05-19 22:29:04.844436 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2025-05-19 22:29:04.844452 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2025-05-19 22:29:04.844463 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2025-05-19 22:29:04.844507 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2025-05-19 22:29:04.844519 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-05-19 22:29:04.844530 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2025-05-19 22:29:04.844541 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-05-19 22:29:05.249950 | orchestrator | + openstack network service provider list
2025-05-19 22:29:07.969475 | orchestrator | +---------------+------+---------+
2025-05-19 22:29:07.969638 | orchestrator | | Service Type | Name | Default |
2025-05-19 22:29:07.969655 | orchestrator | +---------------+------+---------+
2025-05-19 22:29:07.969666 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2025-05-19 22:29:07.969678 | orchestrator | +---------------+------+---------+
2025-05-19 22:29:08.349139 | orchestrator |
2025-05-19 22:29:08.349260 | orchestrator | # Nova
2025-05-19 22:29:08.349284 | orchestrator |
2025-05-19 22:29:08.349305 | orchestrator | + echo
2025-05-19 22:29:08.349322 | orchestrator | + echo '# Nova'
2025-05-19 22:29:08.349341 | orchestrator | + echo
2025-05-19 22:29:08.349360 | orchestrator | + openstack compute service list
2025-05-19 22:29:11.250677 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-05-19 22:29:11.250820 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2025-05-19 22:29:11.250838 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-05-19 22:29:11.250850 | orchestrator | | bd65368b-31ee-4734-b302-4c2893a143ad | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-05-19T22:29:09.000000 |
2025-05-19 22:29:11.250862 | orchestrator | | e6760e84-69d6-4557-b808-f112c8d98c38 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-05-19T22:29:03.000000 |
2025-05-19 22:29:11.250873 | orchestrator | | 8c15a988-89ca-4ead-954c-c09ea90276ce | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-05-19T22:29:02.000000 |
2025-05-19 22:29:11.250884 | orchestrator | | fb867d43-1f0e-4efe-bccd-2e34db212cb9 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-05-19T22:29:04.000000 |
2025-05-19 22:29:11.250895 | orchestrator | | 96ad1cca-943c-4003-b74e-fb14dd1c9bcd | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-05-19T22:29:07.000000 |
2025-05-19 22:29:11.250906 | orchestrator | | 2ba7ba22-4cd8-4784-b3e7-fddfdf34d0e5 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-05-19T22:29:01.000000 |
2025-05-19 22:29:11.250917 | orchestrator | | cf0af5ea-285d-484f-9d5f-e648d201f90f | nova-compute | testbed-node-3 | nova | enabled | up | 2025-05-19T22:29:03.000000 |
2025-05-19 22:29:11.250928 | orchestrator | | fd3bf71a-c276-4bd0-8350-a95be63e8101 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-05-19T22:29:04.000000 |
2025-05-19 22:29:11.250939 | orchestrator | | e403ab8c-9e52-4e63-9575-af394ea037be | nova-compute | testbed-node-4 | nova | enabled | up | 2025-05-19T22:29:04.000000 |
2025-05-19 22:29:11.250950 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-05-19 22:29:11.623783 | orchestrator | + openstack hypervisor list
2025-05-19 22:29:16.655065 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-05-19 22:29:16.655177 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-05-19 22:29:16.655193 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-05-19 22:29:16.655233 | orchestrator | | e4b014d9-6694-4f56-8b5c-ba493b33026c | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-05-19 22:29:16.655245 | orchestrator | | 78c9a1da-891d-4993-af48-e92ea3edd654 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-05-19 22:29:16.655256 | orchestrator | | 2602b21e-c8a2-444c-afae-8c9ad799a34b | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-05-19 22:29:16.655267 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-05-19 22:29:16.992115 | orchestrator |
2025-05-19 22:29:16.992225 | orchestrator | # Run OpenStack test play
2025-05-19 22:29:16.992241 | orchestrator |
2025-05-19 22:29:16.992253 | orchestrator | + echo
2025-05-19 22:29:16.992265 | orchestrator | + echo '# Run OpenStack test play'
2025-05-19 22:29:16.992277 | orchestrator | + echo
2025-05-19 22:29:16.992288 | orchestrator | + osism apply --environment openstack test
2025-05-19 22:29:18.823685 | orchestrator | 2025-05-19 22:29:18 | INFO  | Trying to run play test in environment openstack
2025-05-19 22:29:18.889373 | orchestrator | 2025-05-19 22:29:18 | INFO  | Task ddf2549c-b320-40f9-bdf1-6eaf15ccf402 (test) was prepared for execution.
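The health checks above (Cinder volume services, Neutron agents, Nova compute services, hypervisors) all follow the same pattern: list the service agents and confirm none report a down state. A minimal sketch of such a gate for the Cinder table format printed above; the `check_services_up` helper name and the inline sample rows are illustrative, not part of the job:

```shell
#!/bin/sh
# Sketch: exit non-zero when any data row of an `openstack volume service
# list`-style table (State in the 5th cell, as printed above) is not "up".
# Helper name and sample data are assumptions for illustration only.
check_services_up() {
  awk -F'|' '
    # Data rows split into 8 "|"-separated fields; skip the header row.
    NF == 8 && $2 !~ /Binary/ {
      gsub(/ /, "", $6)                # $6 is the State cell
      if ($6 != "up") down++
    }
    END { exit (down > 0) }
  '
}

# Sample rows copied from the log output above.
sample='| cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-05-19T22:28:59.000000 |
| cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-05-19T22:28:58.000000 |'

printf '%s\n' "$sample" | check_services_up && echo "all cinder services up"
```

The separator lines (`+---+`) contain no `|` characters, so `awk` sees a single field and skips them automatically.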
2025-05-19 22:29:18.889479 | orchestrator | 2025-05-19 22:29:18 | INFO  | It takes a moment until task ddf2549c-b320-40f9-bdf1-6eaf15ccf402 (test) has been started and output is visible here.
2025-05-19 22:29:23.185058 | orchestrator |
2025-05-19 22:29:23.187332 | orchestrator | PLAY [Create test project] *****************************************************
2025-05-19 22:29:23.187615 | orchestrator |
2025-05-19 22:29:23.188935 | orchestrator | TASK [Create test domain] ******************************************************
2025-05-19 22:29:23.190142 | orchestrator | Monday 19 May 2025 22:29:23 +0000 (0:00:00.084) 0:00:00.084 ************
2025-05-19 22:29:27.149415 | orchestrator | changed: [localhost]
2025-05-19 22:29:27.149608 | orchestrator |
2025-05-19 22:29:27.152173 | orchestrator | TASK [Create test-admin user] **************************************************
2025-05-19 22:29:27.153022 | orchestrator | Monday 19 May 2025 22:29:27 +0000 (0:00:03.963) 0:00:04.048 ************
2025-05-19 22:29:31.688447 | orchestrator | changed: [localhost]
2025-05-19 22:29:31.688607 | orchestrator |
2025-05-19 22:29:31.690121 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-05-19 22:29:31.691211 | orchestrator | Monday 19 May 2025 22:29:31 +0000 (0:00:04.540) 0:00:08.589 ************
2025-05-19 22:29:38.345832 | orchestrator | changed: [localhost]
2025-05-19 22:29:38.347391 | orchestrator |
2025-05-19 22:29:38.349238 | orchestrator | TASK [Create test project] *****************************************************
2025-05-19 22:29:38.350472 | orchestrator | Monday 19 May 2025 22:29:38 +0000 (0:00:06.657) 0:00:15.246 ************
2025-05-19 22:29:42.566710 | orchestrator | changed: [localhost]
2025-05-19 22:29:42.567662 | orchestrator |
2025-05-19 22:29:42.570353 | orchestrator | TASK [Create test user] ********************************************************
2025-05-19 22:29:42.571459 | orchestrator | Monday 19 May 2025 22:29:42 +0000 (0:00:04.220) 0:00:19.467 ************
2025-05-19 22:29:47.012227 | orchestrator | changed: [localhost]
2025-05-19 22:29:47.013514 | orchestrator |
2025-05-19 22:29:47.014364 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-05-19 22:29:47.015238 | orchestrator | Monday 19 May 2025 22:29:46 +0000 (0:00:04.438) 0:00:23.905 ************
2025-05-19 22:29:59.924771 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-05-19 22:29:59.924879 | orchestrator | changed: [localhost] => (item=member)
2025-05-19 22:29:59.924894 | orchestrator | changed: [localhost] => (item=creator)
2025-05-19 22:29:59.924904 | orchestrator |
2025-05-19 22:29:59.925642 | orchestrator | TASK [Create test server group] ************************************************
2025-05-19 22:29:59.926392 | orchestrator | Monday 19 May 2025 22:29:59 +0000 (0:00:12.914) 0:00:36.819 ************
2025-05-19 22:30:04.634981 | orchestrator | changed: [localhost]
2025-05-19 22:30:04.635831 | orchestrator |
2025-05-19 22:30:04.637702 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-05-19 22:30:04.639309 | orchestrator | Monday 19 May 2025 22:30:04 +0000 (0:00:04.715) 0:00:41.535 ************
2025-05-19 22:30:09.808457 | orchestrator | changed: [localhost]
2025-05-19 22:30:09.808825 | orchestrator |
2025-05-19 22:30:09.809425 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-05-19 22:30:09.810553 | orchestrator | Monday 19 May 2025 22:30:09 +0000 (0:00:05.174) 0:00:46.709 ************
2025-05-19 22:30:14.385914 | orchestrator | changed: [localhost]
2025-05-19 22:30:14.386069 | orchestrator |
2025-05-19 22:30:14.386091 | orchestrator | TASK [Create icmp security group] **********************************************
2025-05-19 22:30:14.387042 | orchestrator | Monday 19 May 2025 22:30:14 +0000 (0:00:04.570) 0:00:51.279 ************
2025-05-19 22:30:18.464884 | orchestrator | changed: [localhost]
2025-05-19 22:30:18.464993 | orchestrator |
2025-05-19 22:30:18.465011 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-05-19 22:30:18.466260 | orchestrator | Monday 19 May 2025 22:30:18 +0000 (0:00:04.082) 0:00:55.362 ************
2025-05-19 22:30:22.527869 | orchestrator | changed: [localhost]
2025-05-19 22:30:22.528013 | orchestrator |
2025-05-19 22:30:22.531869 | orchestrator | TASK [Create test keypair] *****************************************************
2025-05-19 22:30:22.531964 | orchestrator | Monday 19 May 2025 22:30:22 +0000 (0:00:04.062) 0:00:59.425 ************
2025-05-19 22:30:26.574425 | orchestrator | changed: [localhost]
2025-05-19 22:30:26.574980 | orchestrator |
2025-05-19 22:30:26.577257 | orchestrator | TASK [Create test network topology] ********************************************
2025-05-19 22:30:26.578725 | orchestrator | Monday 19 May 2025 22:30:26 +0000 (0:00:04.047) 0:01:03.473 ************
2025-05-19 22:30:40.977115 | orchestrator | changed: [localhost]
2025-05-19 22:30:40.977245 | orchestrator |
2025-05-19 22:30:40.977262 | orchestrator | TASK [Create test instances] ***************************************************
2025-05-19 22:30:40.977276 | orchestrator | Monday 19 May 2025 22:30:40 +0000 (0:00:14.401) 0:01:17.874 ************
2025-05-19 22:32:53.046175 | orchestrator | changed: [localhost] => (item=test)
2025-05-19 22:32:53.046324 | orchestrator | changed: [localhost] => (item=test-1)
2025-05-19 22:32:53.046343 | orchestrator | changed: [localhost] => (item=test-2)
2025-05-19 22:32:53.047880 | orchestrator |
2025-05-19 22:32:53.048631 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-05-19 22:33:23.048704 | orchestrator | changed: [localhost] => (item=test-3)
2025-05-19 22:33:23.048825 | orchestrator |
2025-05-19 22:33:23.048843 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-05-19 22:33:53.049810 | orchestrator |
2025-05-19 22:33:53.049947 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-05-19 22:33:56.013662 | orchestrator | changed: [localhost] => (item=test-4)
2025-05-19 22:33:56.014075 | orchestrator |
2025-05-19 22:33:56.016761 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-05-19 22:33:56.017511 | orchestrator | Monday 19 May 2025 22:33:56 +0000 (0:03:15.040) 0:04:32.914 ************
2025-05-19 22:34:21.442080 | orchestrator | changed: [localhost] => (item=test)
2025-05-19 22:34:21.442202 | orchestrator | changed: [localhost] => (item=test-1)
2025-05-19 22:34:21.442218 | orchestrator | changed: [localhost] => (item=test-2)
2025-05-19 22:34:21.443053 | orchestrator | changed: [localhost] => (item=test-3)
2025-05-19 22:34:21.444611 | orchestrator | changed: [localhost] => (item=test-4)
2025-05-19 22:34:21.445113 | orchestrator |
2025-05-19 22:34:21.445889 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-05-19 22:34:21.446609 | orchestrator | Monday 19 May 2025 22:34:21 +0000 (0:00:25.423) 0:04:58.338 ************
2025-05-19 22:34:55.889053 | orchestrator | changed: [localhost] => (item=test)
2025-05-19 22:34:55.889175 | orchestrator | changed: [localhost] => (item=test-1)
2025-05-19 22:34:55.889191 | orchestrator | changed: [localhost] => (item=test-2)
2025-05-19 22:34:55.889203 | orchestrator | changed: [localhost] => (item=test-3)
2025-05-19 22:34:55.889215 | orchestrator | changed: [localhost] => (item=test-4)
2025-05-19 22:34:55.889226 | orchestrator |
2025-05-19 22:34:55.889251 | orchestrator | TASK [Create test volume] ******************************************************
2025-05-19 22:34:55.889790 | orchestrator | Monday 19 May 2025 22:34:55 +0000 (0:00:34.441) 0:05:32.780 ************
2025-05-19 22:35:03.821134 | orchestrator | changed: [localhost]
2025-05-19 22:35:03.822278 | orchestrator |
2025-05-19 22:35:03.823800 | orchestrator | TASK [Attach test volume] ******************************************************
2025-05-19 22:35:03.824859 | orchestrator | Monday 19 May 2025 22:35:03 +0000 (0:00:07.941) 0:05:40.721 ************
2025-05-19 22:35:17.605370 | orchestrator | changed: [localhost]
2025-05-19 22:35:17.605590 | orchestrator |
2025-05-19 22:35:17.605610 | orchestrator | TASK [Create floating ip address] **********************************************
2025-05-19 22:35:17.606269 | orchestrator | Monday 19 May 2025 22:35:17 +0000 (0:00:13.783) 0:05:54.505 ************
2025-05-19 22:35:23.104743 | orchestrator | ok: [localhost]
2025-05-19 22:35:23.104872 | orchestrator |
2025-05-19 22:35:23.105832 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-05-19 22:35:23.106174 | orchestrator | Monday 19 May 2025 22:35:23 +0000 (0:00:05.501) 0:06:00.007 ************
2025-05-19 22:35:23.146892 | orchestrator | ok: [localhost] => {
2025-05-19 22:35:23.147137 | orchestrator |  "msg": "192.168.112.110"
2025-05-19 22:35:23.147634 | orchestrator | }
2025-05-19 22:35:23.147658 | orchestrator |
2025-05-19 22:35:23.147777 | orchestrator | PLAY RECAP *********************************************************************
2025-05-19 22:35:23.148752 | orchestrator | 2025-05-19 22:35:23 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-19 22:35:23.148786 | orchestrator | 2025-05-19 22:35:23 | INFO  | Please wait and do not abort execution.
2025-05-19 22:35:23.149662 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-19 22:35:23.150523 | orchestrator |
2025-05-19 22:35:23.150811 | orchestrator |
2025-05-19 22:35:23.152059 | orchestrator | TASKS RECAP ********************************************************************
2025-05-19 22:35:23.152303 | orchestrator | Monday 19 May 2025 22:35:23 +0000 (0:00:00.041) 0:06:00.049 ************
2025-05-19 22:35:23.153165 | orchestrator | ===============================================================================
2025-05-19 22:35:23.153384 | orchestrator | Create test instances ------------------------------------------------- 195.04s
2025-05-19 22:35:23.153792 | orchestrator | Add tag to instances --------------------------------------------------- 34.44s
2025-05-19 22:35:23.155656 | orchestrator | Add metadata to instances ---------------------------------------------- 25.42s
2025-05-19 22:35:23.156609 | orchestrator | Create test network topology ------------------------------------------- 14.40s
2025-05-19 22:35:23.158560 | orchestrator | Attach test volume ----------------------------------------------------- 13.78s
2025-05-19 22:35:23.159692 | orchestrator | Add member roles to user test ------------------------------------------ 12.91s
2025-05-19 22:35:23.160533 | orchestrator | Create test volume ------------------------------------------------------ 7.94s
2025-05-19 22:35:23.161281 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.66s
2025-05-19 22:35:23.162133 | orchestrator | Create floating ip address ---------------------------------------------- 5.50s
2025-05-19 22:35:23.162962 | orchestrator | Create ssh security group ----------------------------------------------- 5.17s
2025-05-19 22:35:23.163238 | orchestrator | Create test server group ------------------------------------------------ 4.72s
2025-05-19 22:35:23.163756 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.57s
2025-05-19 22:35:23.164474 | orchestrator | Create test-admin user -------------------------------------------------- 4.54s
2025-05-19 22:35:23.165234 | orchestrator | Create test user -------------------------------------------------------- 4.44s
2025-05-19 22:35:23.166232 | orchestrator | Create test project ----------------------------------------------------- 4.22s
2025-05-19 22:35:23.166274 | orchestrator | Create icmp security group ---------------------------------------------- 4.08s
2025-05-19 22:35:23.167382 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.06s
2025-05-19 22:35:23.167690 | orchestrator | Create test keypair ----------------------------------------------------- 4.05s
2025-05-19 22:35:23.168509 | orchestrator | Create test domain ------------------------------------------------------ 3.96s
2025-05-19 22:35:23.168772 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-05-19 22:35:23.805267 | orchestrator | + server_list
2025-05-19 22:35:23.805407 | orchestrator | + openstack --os-cloud test server list
2025-05-19 22:35:27.527294 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-05-19 22:35:27.527446 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-05-19 22:35:27.527461 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-05-19 22:35:27.527472 | orchestrator | | 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 | test-4 | ACTIVE | auto_allocated_network=10.42.0.20, 192.168.112.199 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-05-19 22:35:27.527482 | orchestrator | | 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 | test-3 | ACTIVE | auto_allocated_network=10.42.0.7,
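The PLAY RECAP block above is what a wrapper around `osism apply` would typically gate on: the run only counts as green when both `failed=0` and `unreachable=0`. A hedged sketch of that check; the `recap_ok` helper is illustrative, while the recap line itself is copied from the log:

```shell
#!/bin/sh
# Sketch: derive pass/fail from an Ansible PLAY RECAP line such as the one
# printed above. The helper name is an assumption, not part of the job.
recap='localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0'

recap_ok() {
  # Word-bounded matches so e.g. failed=10 cannot pass as failed=1 + 0.
  echo "$1" | grep -Eq '(^| )unreachable=0( |$)' &&
  echo "$1" | grep -Eq '(^| )failed=0( |$)'
}

recap_ok "$recap" && echo "play succeeded"
```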
192.168.112.134 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-05-19 22:35:27.527492 | orchestrator | | c7b60476-003c-4b32-89e6-cc4b971830f3 | test-2 | ACTIVE | auto_allocated_network=10.42.0.23, 192.168.112.169 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-05-19 22:35:27.527502 | orchestrator | | 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 | test-1 | ACTIVE | auto_allocated_network=10.42.0.27, 192.168.112.200 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-05-19 22:35:27.527512 | orchestrator | | e200a6b5-1e27-43f8-9ce9-08589233be70 | test | ACTIVE | auto_allocated_network=10.42.0.54, 192.168.112.110 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-05-19 22:35:27.527522 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-05-19 22:35:27.864424 | orchestrator | + openstack --os-cloud test server show test
2025-05-19 22:35:31.452394 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-05-19 22:35:31.452511 | orchestrator | | Field | Value |
2025-05-19 22:35:31.452524 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-05-19 22:35:31.452534 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-05-19 22:35:31.452550 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-05-19 22:35:31.452560 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-05-19 22:35:31.452590 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-05-19 22:35:31.452600 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-05-19 22:35:31.452610 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-05-19 22:35:31.452619 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-05-19 22:35:31.452629 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-05-19 22:35:31.452655 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-05-19 22:35:31.452664 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-05-19 22:35:31.452674 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-05-19 22:35:31.452683 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-05-19 22:35:31.452693 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-05-19 22:35:31.452707 | orchestrator | | OS-EXT-STS:task_state | None |
2025-05-19 22:35:31.452724 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-05-19 22:35:31.452734 | orchestrator | | OS-SRV-USG:launched_at | 2025-05-19T22:31:10.000000 |
2025-05-19 22:35:31.452744 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-05-19 22:35:31.452754 | orchestrator | | accessIPv4 | |
2025-05-19 22:35:31.452764 | orchestrator | | accessIPv6 | |
2025-05-19 22:35:31.452775 | orchestrator | | addresses | auto_allocated_network=10.42.0.54, 192.168.112.110 |
2025-05-19 22:35:31.452791 | orchestrator | | config_drive | |
2025-05-19 22:35:31.452801 | orchestrator | | created | 2025-05-19T22:30:49Z |
2025-05-19 22:35:31.452807 | orchestrator | | description | None |
2025-05-19 22:35:31.452813 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-05-19 22:35:31.452824 | orchestrator | | hostId | b9ea91a0059d0fc5b392975a9876844b3c4dda301b7905e44821bb5a |
2025-05-19 22:35:31.452831 | orchestrator | | host_status | None |
2025-05-19 22:35:31.452837 | orchestrator | | id | e200a6b5-1e27-43f8-9ce9-08589233be70 |
2025-05-19 22:35:31.452843 | orchestrator | | image | Cirros 0.6.2 (f2f377a2-8def-43b6-b836-9cf16c77b55f) |
2025-05-19 22:35:31.452849 | orchestrator | | key_name | test |
2025-05-19 22:35:31.452855 | orchestrator | | locked | False |
2025-05-19 22:35:31.452862 | orchestrator | | locked_reason | None |
2025-05-19 22:35:31.452868 | orchestrator | | name | test |
2025-05-19 22:35:31.452885 | orchestrator | | pinned_availability_zone | None |
2025-05-19 22:35:31.452892 | orchestrator | | progress | 0 |
2025-05-19 22:35:31.452899 | orchestrator | | project_id | 948cc1526759406f9e14b47278dcf818 |
2025-05-19 22:35:31.452911 | orchestrator | | properties | hostname='test' |
2025-05-19 22:35:31.452921 | orchestrator | | security_groups | name='icmp' |
2025-05-19 22:35:31.452928 | orchestrator | | | name='ssh' |
2025-05-19 22:35:31.452934 | orchestrator | | server_groups | None |
2025-05-19 22:35:31.452941 | orchestrator | | status | ACTIVE |
2025-05-19 22:35:31.452948 | orchestrator | | tags | test |
2025-05-19 22:35:31.452954 | orchestrator | | trusted_image_certificates | None |
2025-05-19 22:35:31.452961 | orchestrator | | updated | 2025-05-19T22:34:00Z |
2025-05-19 22:35:31.452971 | orchestrator | | user_id | cb788911deac4e2c9d768567a3bcf559 |
2025-05-19 22:35:31.452978 | orchestrator | | volumes_attached | delete_on_termination='False', id='474397af-19ec-4704-96b1-9fd3152a2010' |
2025-05-19 22:35:31.457127 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-05-19 22:35:31.854732 | orchestrator | + openstack --os-cloud test server show test-1
2025-05-19 22:35:35.246686 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-05-19 22:35:35.246825 | orchestrator | | Field | Value |
2025-05-19 22:35:35.246877 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-05-19 22:35:35.246901 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-05-19 22:35:35.246922 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-05-19 22:35:35.246941 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-05-19 22:35:35.246961 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-05-19 22:35:35.246979 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-05-19 22:35:35.246992 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-05-19 22:35:35.247009 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-05-19 22:35:35.247120 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-05-19 22:35:35.247196 | orchestrator | |
OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-05-19 22:35:35.247221 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-05-19 22:35:35.247251 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-05-19 22:35:35.247272 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-05-19 22:35:35.247292 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-05-19 22:35:35.247351 | orchestrator | | OS-EXT-STS:task_state | None |
2025-05-19 22:35:35.247374 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-05-19 22:35:35.247396 | orchestrator | | OS-SRV-USG:launched_at | 2025-05-19T22:31:51.000000 |
2025-05-19 22:35:35.247438 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-05-19 22:35:35.247460 | orchestrator | | accessIPv4 | |
2025-05-19 22:35:35.247485 | orchestrator | | accessIPv6 | |
2025-05-19 22:35:35.247499 | orchestrator | | addresses | auto_allocated_network=10.42.0.27, 192.168.112.200 |
2025-05-19 22:35:35.247525 | orchestrator | | config_drive | |
2025-05-19 22:35:35.247545 | orchestrator | | created | 2025-05-19T22:31:30Z |
2025-05-19 22:35:35.247573 | orchestrator | | description | None |
2025-05-19 22:35:35.247586 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-05-19 22:35:35.247597 | orchestrator | | hostId | 05e8fade4e150be110e9ab28cba8f35455166906b396b4cf7257bea3 |
2025-05-19 22:35:35.247608 | orchestrator | | host_status | None |
2025-05-19 22:35:35.247620 | orchestrator | | id | 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 |
2025-05-19 22:35:35.247631 | orchestrator | | image | Cirros 0.6.2 (f2f377a2-8def-43b6-b836-9cf16c77b55f) |
2025-05-19 22:35:35.247647 | orchestrator | | key_name | test |
2025-05-19 22:35:35.247684 | orchestrator | | locked | False |
2025-05-19 22:35:35.247705 | orchestrator | | locked_reason | None |
2025-05-19 22:35:35.247722 | orchestrator | | name | test-1 |
2025-05-19 22:35:35.247748 | orchestrator | | pinned_availability_zone | None |
2025-05-19 22:35:35.247760 | orchestrator | | progress | 0 |
2025-05-19 22:35:35.247781 | orchestrator | | project_id | 948cc1526759406f9e14b47278dcf818 |
2025-05-19 22:35:35.247802 | orchestrator | | properties | hostname='test-1' |
2025-05-19 22:35:35.247822 | orchestrator | | security_groups | name='icmp' |
2025-05-19 22:35:35.247840 | orchestrator | | | name='ssh' |
2025-05-19 22:35:35.247858 | orchestrator | | server_groups | None |
2025-05-19 22:35:35.247877 | orchestrator | | status | ACTIVE |
2025-05-19 22:35:35.247910 | orchestrator | | tags | test |
2025-05-19 22:35:35.247924 | orchestrator | | trusted_image_certificates | None |
2025-05-19 22:35:35.247935 | orchestrator | | updated | 2025-05-19T22:34:05Z |
2025-05-19 22:35:35.247952 | orchestrator | | user_id | cb788911deac4e2c9d768567a3bcf559 |
2025-05-19 22:35:35.247964 | orchestrator | | volumes_attached | |
2025-05-19 22:35:35.251498 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-05-19 22:35:35.632758 | orchestrator | + openstack --os-cloud test server show test-2
2025-05-19 22:35:39.043515 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-05-19 22:35:39.043673 | orchestrator | | Field | Value |
2025-05-19 22:35:39.043696 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-05-19 22:35:39.043709 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-05-19 22:35:39.043742 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-05-19 22:35:39.043754 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-05-19 22:35:39.043766 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-05-19 22:35:39.043777 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-05-19 22:35:39.043788 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-05-19 22:35:39.043799 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-05-19 22:35:39.043810 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-05-19 22:35:39.043848 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-05-19 22:35:39.043861 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-05-19 22:35:39.043872 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-05-19 22:35:39.043884 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-05-19 22:35:39.043902 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-05-19 22:35:39.043913 | orchestrator | | OS-EXT-STS:task_state | None |
2025-05-19 22:35:39.043925 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-05-19 22:35:39.043938 | orchestrator | | OS-SRV-USG:launched_at | 2025-05-19T22:32:34.000000 |
2025-05-19 22:35:39.043952 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-05-19 22:35:39.044000 | orchestrator | | accessIPv4 | |
2025-05-19 22:35:39.044025 | orchestrator | | accessIPv6 | |
2025-05-19 22:35:39.044043 | orchestrator | | addresses | auto_allocated_network=10.42.0.23, 192.168.112.169 |
2025-05-19 22:35:39.044064 | orchestrator | | config_drive | |
2025-05-19 22:35:39.044078 | orchestrator | | created | 2025-05-19T22:32:12Z |
2025-05-19 22:35:39.044091 | orchestrator | | description | None |
2025-05-19 22:35:39.044115 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-05-19 22:35:39.044128 | orchestrator | | hostId | 30a77d605a7599a5fcc0a5066397c4f000292ce930a50bf4a6afe256 |
2025-05-19 22:35:39.044140 | orchestrator | | host_status | None |
2025-05-19 22:35:39.044153 | orchestrator | | id | c7b60476-003c-4b32-89e6-cc4b971830f3 |
2025-05-19 22:35:39.044166 | orchestrator | | image | Cirros 0.6.2 (f2f377a2-8def-43b6-b836-9cf16c77b55f) |
2025-05-19 22:35:39.044178 | orchestrator | | key_name | test |
2025-05-19 22:35:39.044191 | orchestrator | | locked | False |
2025-05-19 22:35:39.044204 | orchestrator | | locked_reason | None |
2025-05-19 22:35:39.044222 | orchestrator | | name | test-2 |
2025-05-19 22:35:39.044241 | orchestrator | | pinned_availability_zone | None |
2025-05-19 22:35:39.044255 | orchestrator | | progress | 0 |
2025-05-19 22:35:39.044275 | orchestrator | | project_id | 948cc1526759406f9e14b47278dcf818 |
2025-05-19 22:35:39.044288 | orchestrator | | properties | hostname='test-2' |
2025-05-19 22:35:39.044299 | orchestrator | | security_groups | name='icmp' |
2025-05-19 22:35:39.044337 | orchestrator | | | name='ssh' |
2025-05-19 22:35:39.044349 | orchestrator | | server_groups | None |
2025-05-19 22:35:39.044360 | orchestrator | | status | ACTIVE |
2025-05-19 22:35:39.044371 | orchestrator | | tags | test |
2025-05-19 22:35:39.044383 | orchestrator
| | trusted_image_certificates | None | 2025-05-19 22:35:39.044394 | orchestrator | | updated | 2025-05-19T22:34:10Z | 2025-05-19 22:35:39.044417 | orchestrator | | user_id | cb788911deac4e2c9d768567a3bcf559 | 2025-05-19 22:35:39.044430 | orchestrator | | volumes_attached | | 2025-05-19 22:35:39.048858 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 22:35:39.422394 | orchestrator | + openstack --os-cloud test server show test-3 2025-05-19 22:35:42.696033 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 22:35:42.696144 | orchestrator | | Field | Value | 2025-05-19 22:35:42.696162 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 22:35:42.696175 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-05-19 22:35:42.696186 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-05-19 22:35:42.696197 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-05-19 22:35:42.696207 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-05-19 22:35:42.696219 | orchestrator | | 
OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-05-19 22:35:42.696249 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-05-19 22:35:42.696281 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-05-19 22:35:42.696294 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-05-19 22:35:42.696377 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-05-19 22:35:42.696393 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-05-19 22:35:42.696404 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-05-19 22:35:42.696415 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-05-19 22:35:42.696426 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-05-19 22:35:42.696438 | orchestrator | | OS-EXT-STS:task_state | None | 2025-05-19 22:35:42.696449 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-05-19 22:35:42.696460 | orchestrator | | OS-SRV-USG:launched_at | 2025-05-19T22:33:06.000000 | 2025-05-19 22:35:42.696470 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-05-19 22:35:42.696493 | orchestrator | | accessIPv4 | | 2025-05-19 22:35:42.696503 | orchestrator | | accessIPv6 | | 2025-05-19 22:35:42.696514 | orchestrator | | addresses | auto_allocated_network=10.42.0.7, 192.168.112.134 | 2025-05-19 22:35:42.696533 | orchestrator | | config_drive | | 2025-05-19 22:35:42.696545 | orchestrator | | created | 2025-05-19T22:32:50Z | 2025-05-19 22:35:42.696567 | orchestrator | | description | None | 2025-05-19 22:35:42.696579 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-05-19 22:35:42.696591 | orchestrator | | hostId | 05e8fade4e150be110e9ab28cba8f35455166906b396b4cf7257bea3 | 2025-05-19 22:35:42.696602 | 
orchestrator | | host_status | None | 2025-05-19 22:35:42.696613 | orchestrator | | id | 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 | 2025-05-19 22:35:42.696625 | orchestrator | | image | Cirros 0.6.2 (f2f377a2-8def-43b6-b836-9cf16c77b55f) | 2025-05-19 22:35:42.696650 | orchestrator | | key_name | test | 2025-05-19 22:35:42.696666 | orchestrator | | locked | False | 2025-05-19 22:35:42.696679 | orchestrator | | locked_reason | None | 2025-05-19 22:35:42.696690 | orchestrator | | name | test-3 | 2025-05-19 22:35:42.696707 | orchestrator | | pinned_availability_zone | None | 2025-05-19 22:35:42.696719 | orchestrator | | progress | 0 | 2025-05-19 22:35:42.696731 | orchestrator | | project_id | 948cc1526759406f9e14b47278dcf818 | 2025-05-19 22:35:42.696742 | orchestrator | | properties | hostname='test-3' | 2025-05-19 22:35:42.696753 | orchestrator | | security_groups | name='icmp' | 2025-05-19 22:35:42.696765 | orchestrator | | | name='ssh' | 2025-05-19 22:35:42.696778 | orchestrator | | server_groups | None | 2025-05-19 22:35:42.696798 | orchestrator | | status | ACTIVE | 2025-05-19 22:35:42.696814 | orchestrator | | tags | test | 2025-05-19 22:35:42.696825 | orchestrator | | trusted_image_certificates | None | 2025-05-19 22:35:42.696836 | orchestrator | | updated | 2025-05-19T22:34:16Z | 2025-05-19 22:35:42.696852 | orchestrator | | user_id | cb788911deac4e2c9d768567a3bcf559 | 2025-05-19 22:35:42.696863 | orchestrator | | volumes_attached | | 2025-05-19 22:35:42.701764 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 22:35:43.063029 | orchestrator | + openstack --os-cloud test server show test-4 2025-05-19 22:35:46.451154 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 22:35:46.451273 | orchestrator | | Field | Value | 2025-05-19 22:35:46.451291 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 22:35:46.451375 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-05-19 22:35:46.451389 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-05-19 22:35:46.451401 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-05-19 22:35:46.451428 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-05-19 22:35:46.451440 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-05-19 22:35:46.451451 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-05-19 22:35:46.451462 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-05-19 22:35:46.451473 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-05-19 22:35:46.451510 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-05-19 22:35:46.451532 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-05-19 22:35:46.451550 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-05-19 22:35:46.451579 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-05-19 22:35:46.451591 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-05-19 22:35:46.451602 | orchestrator | | OS-EXT-STS:task_state | None | 2025-05-19 22:35:46.451613 | 
orchestrator | | OS-EXT-STS:vm_state | active | 2025-05-19 22:35:46.451630 | orchestrator | | OS-SRV-USG:launched_at | 2025-05-19T22:33:40.000000 | 2025-05-19 22:35:46.451642 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-05-19 22:35:46.451653 | orchestrator | | accessIPv4 | | 2025-05-19 22:35:46.451664 | orchestrator | | accessIPv6 | | 2025-05-19 22:35:46.451675 | orchestrator | | addresses | auto_allocated_network=10.42.0.20, 192.168.112.199 | 2025-05-19 22:35:46.451694 | orchestrator | | config_drive | | 2025-05-19 22:35:46.451706 | orchestrator | | created | 2025-05-19T22:33:23Z | 2025-05-19 22:35:46.451725 | orchestrator | | description | None | 2025-05-19 22:35:46.451736 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-05-19 22:35:46.451747 | orchestrator | | hostId | b9ea91a0059d0fc5b392975a9876844b3c4dda301b7905e44821bb5a | 2025-05-19 22:35:46.451759 | orchestrator | | host_status | None | 2025-05-19 22:35:46.451775 | orchestrator | | id | 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 | 2025-05-19 22:35:46.451786 | orchestrator | | image | Cirros 0.6.2 (f2f377a2-8def-43b6-b836-9cf16c77b55f) | 2025-05-19 22:35:46.451798 | orchestrator | | key_name | test | 2025-05-19 22:35:46.451809 | orchestrator | | locked | False | 2025-05-19 22:35:46.451820 | orchestrator | | locked_reason | None | 2025-05-19 22:35:46.451832 | orchestrator | | name | test-4 | 2025-05-19 22:35:46.451855 | orchestrator | | pinned_availability_zone | None | 2025-05-19 22:35:46.451867 | orchestrator | | progress | 0 | 2025-05-19 22:35:46.451879 | orchestrator | | project_id | 948cc1526759406f9e14b47278dcf818 | 2025-05-19 22:35:46.451890 | orchestrator | | properties | hostname='test-4' | 2025-05-19 
22:35:46.451901 | orchestrator | | security_groups | name='icmp' | 2025-05-19 22:35:46.451912 | orchestrator | | | name='ssh' | 2025-05-19 22:35:46.451929 | orchestrator | | server_groups | None | 2025-05-19 22:35:46.451940 | orchestrator | | status | ACTIVE | 2025-05-19 22:35:46.451952 | orchestrator | | tags | test | 2025-05-19 22:35:46.451963 | orchestrator | | trusted_image_certificates | None | 2025-05-19 22:35:46.451974 | orchestrator | | updated | 2025-05-19T22:34:21Z | 2025-05-19 22:35:46.451996 | orchestrator | | user_id | cb788911deac4e2c9d768567a3bcf559 | 2025-05-19 22:35:46.452008 | orchestrator | | volumes_attached | | 2025-05-19 22:35:46.457675 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-05-19 22:35:46.857741 | orchestrator | + server_ping 2025-05-19 22:35:46.858544 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-05-19 22:35:46.858560 | orchestrator | ++ tr -d '\r' 2025-05-19 22:35:50.031405 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-05-19 22:35:50.031491 | orchestrator | + ping -c3 192.168.112.134 2025-05-19 22:35:50.044423 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data. 
2025-05-19 22:35:50.044512 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=6.63 ms
2025-05-19 22:35:51.042889 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=3.05 ms
2025-05-19 22:35:52.043217 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=2.19 ms
2025-05-19 22:35:52.043371 | orchestrator |
2025-05-19 22:35:52.043390 | orchestrator | --- 192.168.112.134 ping statistics ---
2025-05-19 22:35:52.043403 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:35:52.043415 | orchestrator | rtt min/avg/max/mdev = 2.188/3.954/6.628/1.922 ms
2025-05-19 22:35:52.044617 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:35:52.044671 | orchestrator | + ping -c3 192.168.112.169
2025-05-19 22:35:52.056071 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2025-05-19 22:35:52.056166 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=6.94 ms
2025-05-19 22:35:53.052948 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.60 ms
2025-05-19 22:35:54.054784 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.24 ms
2025-05-19 22:35:54.054881 | orchestrator |
2025-05-19 22:35:54.054893 | orchestrator | --- 192.168.112.169 ping statistics ---
2025-05-19 22:35:54.054902 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:35:54.054910 | orchestrator | rtt min/avg/max/mdev = 2.241/3.926/6.935/2.132 ms
2025-05-19 22:35:54.054919 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:35:54.054928 | orchestrator | + ping -c3 192.168.112.200
2025-05-19 22:35:54.066682 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data.
2025-05-19 22:35:54.066759 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=5.80 ms
2025-05-19 22:35:55.063596 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.18 ms
2025-05-19 22:35:56.064742 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.97 ms
2025-05-19 22:35:56.064843 | orchestrator |
2025-05-19 22:35:56.064855 | orchestrator | --- 192.168.112.200 ping statistics ---
2025-05-19 22:35:56.064865 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-05-19 22:35:56.064874 | orchestrator | rtt min/avg/max/mdev = 1.971/3.316/5.797/1.756 ms
2025-05-19 22:35:56.065582 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:35:56.065632 | orchestrator | + ping -c3 192.168.112.199
2025-05-19 22:35:56.078166 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data.
2025-05-19 22:35:56.078224 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=7.22 ms
2025-05-19 22:35:57.074787 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=2.64 ms
2025-05-19 22:35:58.076410 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=2.09 ms
2025-05-19 22:35:58.076501 | orchestrator |
2025-05-19 22:35:58.076508 | orchestrator | --- 192.168.112.199 ping statistics ---
2025-05-19 22:35:58.076513 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:35:58.076518 | orchestrator | rtt min/avg/max/mdev = 2.090/3.984/7.222/2.300 ms
2025-05-19 22:35:58.077354 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:35:58.077377 | orchestrator | + ping -c3 192.168.112.110
2025-05-19 22:35:58.092241 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2025-05-19 22:35:58.092324 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=9.37 ms
2025-05-19 22:35:59.087434 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.62 ms
2025-05-19 22:36:00.088253 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=1.70 ms
2025-05-19 22:36:00.088381 | orchestrator |
2025-05-19 22:36:00.088390 | orchestrator | --- 192.168.112.110 ping statistics ---
2025-05-19 22:36:00.088397 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-05-19 22:36:00.088402 | orchestrator | rtt min/avg/max/mdev = 1.695/4.561/9.369/3.420 ms
2025-05-19 22:36:00.089222 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-19 22:36:00.089255 | orchestrator | + compute_list
2025-05-19 22:36:00.089263 | orchestrator | + osism manage compute list testbed-node-3
2025-05-19 22:36:03.477226 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:36:03.477356 | orchestrator | | ID | Name | Status |
2025-05-19 22:36:03.477366 | orchestrator | |--------------------------------------+--------+----------|
2025-05-19 22:36:03.477663 | orchestrator | | 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 | test-3 | ACTIVE |
2025-05-19 22:36:03.477672 | orchestrator | | 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 | test-1 | ACTIVE |
2025-05-19 22:36:03.477678 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:36:03.854474 | orchestrator | + osism manage compute list testbed-node-4
2025-05-19 22:36:06.957197 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:36:06.957336 | orchestrator | | ID | Name | Status |
2025-05-19 22:36:06.957349 | orchestrator | |--------------------------------------+--------+----------|
2025-05-19 22:36:06.957356 | orchestrator | | c7b60476-003c-4b32-89e6-cc4b971830f3 | test-2 | ACTIVE |
2025-05-19 22:36:06.957363 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:36:07.384853 | orchestrator | + osism manage compute list testbed-node-5
2025-05-19 22:36:10.555834 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:36:10.555978 | orchestrator | | ID | Name | Status |
2025-05-19 22:36:10.556008 | orchestrator | |--------------------------------------+--------+----------|
2025-05-19 22:36:10.556028 | orchestrator | | 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 | test-4 | ACTIVE |
2025-05-19 22:36:10.556048 | orchestrator | | e200a6b5-1e27-43f8-9ce9-08589233be70 | test | ACTIVE |
2025-05-19 22:36:10.556068 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:36:10.887907 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2025-05-19 22:36:13.966805 | orchestrator | 2025-05-19 22:36:13 | INFO | Live migrating server c7b60476-003c-4b32-89e6-cc4b971830f3
2025-05-19 22:36:26.545657 | orchestrator | 2025-05-19 22:36:26 | INFO | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:36:29.171445 | orchestrator | 2025-05-19 22:36:29 | INFO | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:36:31.574482 | orchestrator | 2025-05-19 22:36:31 | INFO | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:36:34.246158 | orchestrator | 2025-05-19 22:36:34 | INFO | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:36:36.629101 | orchestrator | 2025-05-19 22:36:36 | INFO | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:36:38.974769 | orchestrator | 2025-05-19 22:36:38 | INFO | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:36:41.223535 | orchestrator | 2025-05-19 22:36:41 | INFO | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:36:43.573882 | orchestrator | 2025-05-19 22:36:43 | INFO | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) completed with status ACTIVE
2025-05-19 22:36:43.932738 | orchestrator | + compute_list
2025-05-19 22:36:43.932825 | orchestrator | + osism manage compute list testbed-node-3
2025-05-19 22:36:47.127526 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:36:47.127629 | orchestrator | | ID | Name | Status |
2025-05-19 22:36:47.127639 | orchestrator | |--------------------------------------+--------+----------|
2025-05-19 22:36:47.127647 | orchestrator | | 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 | test-3 | ACTIVE |
2025-05-19 22:36:47.127655 | orchestrator | | c7b60476-003c-4b32-89e6-cc4b971830f3 | test-2 | ACTIVE |
2025-05-19 22:36:47.127662 | orchestrator | | 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 | test-1 | ACTIVE |
2025-05-19 22:36:47.127670 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:36:47.488907 | orchestrator | + osism manage compute list testbed-node-4
2025-05-19 22:36:50.273223 | orchestrator | +------+--------+----------+
2025-05-19 22:36:50.273365 | orchestrator | | ID | Name | Status |
2025-05-19 22:36:50.273381 | orchestrator | |------+--------+----------|
2025-05-19 22:36:50.273393 | orchestrator | +------+--------+----------+
2025-05-19 22:36:50.661226 | orchestrator | + osism manage compute list testbed-node-5
2025-05-19 22:36:53.648413 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:36:53.648544 | orchestrator | | ID | Name | Status |
2025-05-19 22:36:53.648561 | orchestrator | |--------------------------------------+--------+----------|
2025-05-19 22:36:53.648573 | orchestrator | | 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 | test-4 | ACTIVE |
2025-05-19 22:36:53.648585 | orchestrator | | e200a6b5-1e27-43f8-9ce9-08589233be70 | test | ACTIVE |
2025-05-19 22:36:53.648596 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:36:53.934707 | orchestrator | + server_ping
2025-05-19 22:36:53.936091 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-05-19 22:36:53.936344 | orchestrator | ++ tr -d '\r'
2025-05-19 22:36:57.098223 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:36:57.098400 | orchestrator | + ping -c3 192.168.112.134
2025-05-19 22:36:57.111589 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2025-05-19 22:36:57.111687 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=9.40 ms
2025-05-19 22:36:58.105951 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=2.65 ms
2025-05-19 22:36:59.107818 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=2.06 ms
2025-05-19 22:36:59.107921 | orchestrator |
2025-05-19 22:36:59.107937 | orchestrator | --- 192.168.112.134 ping statistics ---
2025-05-19 22:36:59.107950 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:36:59.107962 | orchestrator | rtt min/avg/max/mdev = 2.055/4.702/9.401/3.331 ms
2025-05-19 22:36:59.107974 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:36:59.107986 | orchestrator | + ping -c3 192.168.112.169
2025-05-19 22:36:59.118653 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2025-05-19 22:36:59.118679 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=6.38 ms
2025-05-19 22:37:00.117755 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=3.22 ms
2025-05-19 22:37:01.117316 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.18 ms
2025-05-19 22:37:01.117438 | orchestrator |
2025-05-19 22:37:01.117457 | orchestrator | --- 192.168.112.169 ping statistics ---
2025-05-19 22:37:01.117470 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-05-19 22:37:01.117482 | orchestrator | rtt min/avg/max/mdev = 2.179/3.926/6.381/1.787 ms
2025-05-19 22:37:01.117766 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:37:01.117791 | orchestrator | + ping -c3 192.168.112.200
2025-05-19 22:37:01.128044 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data.
2025-05-19 22:37:01.128137 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=5.87 ms
2025-05-19 22:37:02.125854 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.67 ms
2025-05-19 22:37:03.127027 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.94 ms
2025-05-19 22:37:03.128080 | orchestrator |
2025-05-19 22:37:03.128152 | orchestrator | --- 192.168.112.200 ping statistics ---
2025-05-19 22:37:03.128169 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:37:03.128181 | orchestrator | rtt min/avg/max/mdev = 1.940/3.493/5.873/1.708 ms
2025-05-19 22:37:03.128210 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:37:03.128222 | orchestrator | + ping -c3 192.168.112.199
2025-05-19 22:37:03.142454 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data.
2025-05-19 22:37:03.142551 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=7.49 ms
2025-05-19 22:37:04.139503 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=2.49 ms
2025-05-19 22:37:05.140948 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=1.73 ms
2025-05-19 22:37:05.141070 | orchestrator |
2025-05-19 22:37:05.141085 | orchestrator | --- 192.168.112.199 ping statistics ---
2025-05-19 22:37:05.141119 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-05-19 22:37:05.141140 | orchestrator | rtt min/avg/max/mdev = 1.732/3.901/7.485/2.552 ms
2025-05-19 22:37:05.141534 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:37:05.141562 | orchestrator | + ping -c3 192.168.112.110
2025-05-19 22:37:05.154752 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2025-05-19 22:37:05.154853 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=8.18 ms
2025-05-19 22:37:06.150948 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.63 ms
2025-05-19 22:37:07.152424 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=2.09 ms
2025-05-19 22:37:07.152544 | orchestrator |
2025-05-19 22:37:07.152563 | orchestrator | --- 192.168.112.110 ping statistics ---
2025-05-19 22:37:07.152577 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:37:07.152590 | orchestrator | rtt min/avg/max/mdev = 2.091/4.297/8.176/2.751 ms
2025-05-19 22:37:07.152964 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2025-05-19 22:37:10.206527 | orchestrator | 2025-05-19 22:37:10 | INFO | Live migrating server 54a1ba25-c6d8-47df-a7ea-2171cbde3f70
2025-05-19 22:37:22.550120 | orchestrator | 2025-05-19 22:37:22 | INFO | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:37:25.058183 | orchestrator | 2025-05-19 22:37:25 | INFO | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:37:27.419777 | orchestrator | 2025-05-19 22:37:27 | INFO | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:37:29.799793 | orchestrator | 2025-05-19 22:37:29 | INFO | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:37:32.056059 | orchestrator | 2025-05-19 22:37:32 | INFO | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:37:34.512862 | orchestrator | 2025-05-19 22:37:34 | INFO | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:37:36.867321 | orchestrator | 2025-05-19 22:37:36 | INFO | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:37:39.259458 | orchestrator | 2025-05-19 22:37:39 | INFO | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:37:41.637735 | orchestrator | 2025-05-19 22:37:41 | INFO | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) completed with status ACTIVE
2025-05-19 22:37:41.637873 | orchestrator | 2025-05-19 22:37:41 | INFO | Live migrating server e200a6b5-1e27-43f8-9ce9-08589233be70
2025-05-19 22:37:53.504609 | orchestrator | 2025-05-19 22:37:53 | INFO | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:37:55.839183 | orchestrator | 2025-05-19 22:37:55 | INFO | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:37:58.425149 | orchestrator | 2025-05-19 22:37:58 | INFO | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:38:00.748541 | orchestrator | 2025-05-19 22:38:00 | INFO | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:38:03.098745 | orchestrator | 2025-05-19 22:38:03 | INFO | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:38:05.378428 | orchestrator | 2025-05-19 22:38:05 | INFO | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:38:07.772208 | orchestrator | 2025-05-19 22:38:07 | INFO | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:38:10.044718 | orchestrator | 2025-05-19 22:38:10 | INFO | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:38:12.331685 | orchestrator | 2025-05-19 22:38:12 | INFO | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:38:14.717586 | orchestrator | 2025-05-19 22:38:14 | INFO | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) completed with status ACTIVE
2025-05-19 22:38:15.086855 | orchestrator | + compute_list
2025-05-19 22:38:15.087588 | orchestrator | + osism manage compute list testbed-node-3
2025-05-19 22:38:18.428575 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:38:18.428677 | orchestrator | | ID | Name | Status |
2025-05-19 22:38:18.428686 | orchestrator | |--------------------------------------+--------+----------|
2025-05-19 22:38:18.428692 | orchestrator | | 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 | test-4 | ACTIVE |
2025-05-19 22:38:18.428698 | orchestrator | | 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 | test-3 | ACTIVE |
2025-05-19 22:38:18.428703 | orchestrator | | c7b60476-003c-4b32-89e6-cc4b971830f3 | test-2 | ACTIVE |
2025-05-19 22:38:18.428709 | orchestrator | | 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 | test-1 | ACTIVE |
2025-05-19 22:38:18.428715 | orchestrator | | e200a6b5-1e27-43f8-9ce9-08589233be70 | test | ACTIVE |
2025-05-19 22:38:18.428720 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:38:18.785321 | orchestrator | + osism manage compute list testbed-node-4
2025-05-19 22:38:21.702784 | orchestrator | +------+--------+----------+
2025-05-19 22:38:21.702966 | orchestrator | | ID | Name | Status |
2025-05-19 22:38:21.702982 | orchestrator | |------+--------+----------|
2025-05-19 22:38:21.702993 | orchestrator | +------+--------+----------+
2025-05-19 22:38:22.084356 | orchestrator | + osism manage compute list testbed-node-5
2025-05-19 22:38:24.809786 | orchestrator | +------+--------+----------+
2025-05-19 22:38:24.809949 | orchestrator | | ID | Name | Status |
2025-05-19 22:38:24.809966 | orchestrator | |------+--------+----------|
2025-05-19 22:38:24.809978 | orchestrator | +------+--------+----------+
2025-05-19 22:38:25.197003 | orchestrator | + server_ping
2025-05-19 22:38:25.198516 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-05-19 22:38:25.198563 | orchestrator | ++ tr -d '\r'
2025-05-19 22:38:28.281403 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:38:28.281489 | orchestrator | + ping -c3 192.168.112.134
2025-05-19 22:38:28.299914 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2025-05-19 22:38:28.300025 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=14.0 ms
2025-05-19 22:38:29.290655 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=3.12 ms
2025-05-19 22:38:30.291186 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.84 ms
2025-05-19 22:38:30.291295 | orchestrator |
2025-05-19 22:38:30.291311 | orchestrator | --- 192.168.112.134 ping statistics ---
2025-05-19 22:38:30.291324 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:38:30.291336 | orchestrator | rtt min/avg/max/mdev = 1.843/6.313/13.981/5.446 ms
2025-05-19 22:38:30.291348 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:38:30.291360 | orchestrator | + ping -c3 192.168.112.169
2025-05-19 22:38:30.302631 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2025-05-19 22:38:30.302746 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=6.07 ms
2025-05-19 22:38:31.299623 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.32 ms
2025-05-19 22:38:32.300601 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.13 ms
2025-05-19 22:38:32.301680 | orchestrator |
2025-05-19 22:38:32.301756 | orchestrator | --- 192.168.112.169 ping statistics ---
2025-05-19 22:38:32.301772 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-05-19 22:38:32.301784 | orchestrator | rtt min/avg/max/mdev = 2.127/3.504/6.066/1.813 ms
2025-05-19 22:38:32.301815 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:38:32.301828 | orchestrator | + ping -c3 192.168.112.200
2025-05-19 22:38:32.313360 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data.
2025-05-19 22:38:32.313449 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=7.09 ms
2025-05-19 22:38:33.310514 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.84 ms
2025-05-19 22:38:34.311652 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=2.21 ms
2025-05-19 22:38:34.311770 | orchestrator |
2025-05-19 22:38:34.311786 | orchestrator | --- 192.168.112.200 ping statistics ---
2025-05-19 22:38:34.311799 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:38:34.311810 | orchestrator | rtt min/avg/max/mdev = 2.206/4.047/7.093/2.169 ms
2025-05-19 22:38:34.311822 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:38:34.311834 | orchestrator | + ping -c3 192.168.112.199
2025-05-19 22:38:34.323795 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data.
2025-05-19 22:38:34.323963 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=5.81 ms
2025-05-19 22:38:35.323054 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=2.20 ms
2025-05-19 22:38:36.323160 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=1.95 ms
2025-05-19 22:38:36.323268 | orchestrator |
2025-05-19 22:38:36.323282 | orchestrator | --- 192.168.112.199 ping statistics ---
2025-05-19 22:38:36.323294 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:38:36.323304 | orchestrator | rtt min/avg/max/mdev = 1.947/3.321/5.813/1.764 ms
2025-05-19 22:38:36.323626 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:38:36.323650 | orchestrator | + ping -c3 192.168.112.110
2025-05-19 22:38:36.335471 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2025-05-19 22:38:36.335556 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=7.54 ms
2025-05-19 22:38:37.333115 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.95 ms
2025-05-19 22:38:38.332992 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=1.76 ms
2025-05-19 22:38:38.333091 | orchestrator |
2025-05-19 22:38:38.333099 | orchestrator | --- 192.168.112.110 ping statistics ---
2025-05-19 22:38:38.333104 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:38:38.333108 | orchestrator | rtt min/avg/max/mdev = 1.764/4.085/7.541/2.490 ms
2025-05-19 22:38:38.333907 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2025-05-19 22:38:41.830312 | orchestrator | 2025-05-19 22:38:41 | INFO  | Live migrating server 54a1ba25-c6d8-47df-a7ea-2171cbde3f70
2025-05-19 22:38:52.578323 | orchestrator | 2025-05-19 22:38:52 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:38:54.944332 | orchestrator | 2025-05-19 22:38:54 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:38:57.308583 | orchestrator | 2025-05-19 22:38:57 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:38:59.642735 | orchestrator | 2025-05-19 22:38:59 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:39:02.004594 | orchestrator | 2025-05-19 22:39:02 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:39:04.339490 | orchestrator | 2025-05-19 22:39:04 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:39:06.651702 | orchestrator | 2025-05-19 22:39:06 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) completed with status ACTIVE
2025-05-19 22:39:06.651821 | orchestrator | 2025-05-19 22:39:06 | INFO  | Live migrating server 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0
2025-05-19 22:39:17.259589 | orchestrator | 2025-05-19 22:39:17 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:39:19.582494 | orchestrator | 2025-05-19 22:39:19 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:39:21.906130 | orchestrator | 2025-05-19 22:39:21 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:39:24.175104 | orchestrator | 2025-05-19 22:39:24 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:39:26.565722 | orchestrator | 2025-05-19 22:39:26 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:39:28.957407 | orchestrator | 2025-05-19 22:39:28 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:39:31.271721 | orchestrator | 2025-05-19 22:39:31 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:39:33.626394 | orchestrator | 2025-05-19 22:39:33 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) completed with status ACTIVE
2025-05-19 22:39:33.626507 | orchestrator | 2025-05-19 22:39:33 | INFO  | Live migrating server c7b60476-003c-4b32-89e6-cc4b971830f3
2025-05-19 22:39:45.810799 | orchestrator | 2025-05-19 22:39:45 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:39:48.157111 | orchestrator | 2025-05-19 22:39:48 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:39:50.525160 | orchestrator | 2025-05-19 22:39:50 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:39:52.801593 | orchestrator | 2025-05-19 22:39:52 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:39:55.117058 | orchestrator | 2025-05-19 22:39:55 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:39:57.438540 | orchestrator | 2025-05-19 22:39:57 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:39:59.716935 | orchestrator | 2025-05-19 22:39:59 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:40:02.000732 | orchestrator | 2025-05-19 22:40:01 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) completed with status ACTIVE
2025-05-19 22:40:02.000838 | orchestrator | 2025-05-19 22:40:01 | INFO  | Live migrating server 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7
2025-05-19 22:40:13.187048 | orchestrator | 2025-05-19 22:40:13 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:40:15.576801 | orchestrator | 2025-05-19 22:40:15 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:40:17.865070 | orchestrator | 2025-05-19 22:40:17 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:40:20.128716 | orchestrator | 2025-05-19 22:40:20 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:40:22.403878 | orchestrator | 2025-05-19 22:40:22 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:40:24.695689 | orchestrator | 2025-05-19 22:40:24 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:40:27.077250 | orchestrator | 2025-05-19 22:40:27 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:40:29.460656 | orchestrator | 2025-05-19 22:40:29 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) completed with status ACTIVE
2025-05-19 22:40:29.460765 | orchestrator | 2025-05-19 22:40:29 | INFO  | Live migrating server e200a6b5-1e27-43f8-9ce9-08589233be70
2025-05-19 22:40:39.519707 | orchestrator | 2025-05-19 22:40:39 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:40:41.879012 | orchestrator | 2025-05-19 22:40:41 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:40:44.237099 | orchestrator | 2025-05-19 22:40:44 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:40:46.587177 | orchestrator | 2025-05-19 22:40:46 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:40:48.938273 | orchestrator | 2025-05-19 22:40:48 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:40:51.202859 | orchestrator | 2025-05-19 22:40:51 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:40:53.497922 | orchestrator | 2025-05-19 22:40:53 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:40:55.856551 | orchestrator | 2025-05-19 22:40:55 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:40:58.147642 | orchestrator | 2025-05-19 22:40:58 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:41:00.445593 | orchestrator | 2025-05-19 22:41:00 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) completed with status ACTIVE
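The `osism manage compute migrate` runs above poll each server until its live migration reports ACTIVE, logging "is still in progress" every couple of seconds. A minimal sketch of that polling pattern, assuming a hypothetical `get_status` callback in place of the real OpenStack API call (this is not the actual osism implementation):

```python
import time


def wait_for_migration(server_id, get_status, interval=2.0, timeout=600.0):
    """Poll get_status(server_id) until the server reports ACTIVE.

    get_status is an injected callable standing in for an OpenStack
    server-status lookup; interval and timeout mirror the roughly
    two-second cadence seen in the job log.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(server_id)
        if status == "ACTIVE":
            return status
        print(f"Live migration of {server_id} is still in progress")
        time.sleep(interval)
    raise TimeoutError(f"migration of {server_id} did not finish within {timeout}s")
```

For testing, the status callback can be faked so the loop sees a few MIGRATING responses before ACTIVE, without touching a real cloud.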
2025-05-19 22:41:00.783163 | orchestrator | + compute_list
2025-05-19 22:41:00.783285 | orchestrator | + osism manage compute list testbed-node-3
2025-05-19 22:41:03.609865 | orchestrator | +------+--------+----------+
2025-05-19 22:41:03.609979 | orchestrator | | ID | Name | Status |
2025-05-19 22:41:03.609995 | orchestrator | |------+--------+----------|
2025-05-19 22:41:03.610011 | orchestrator | +------+--------+----------+
2025-05-19 22:41:03.916962 | orchestrator | + osism manage compute list testbed-node-4
2025-05-19 22:41:07.147038 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:41:07.147153 | orchestrator | | ID | Name | Status |
2025-05-19 22:41:07.147168 | orchestrator | |--------------------------------------+--------+----------|
2025-05-19 22:41:07.147180 | orchestrator | | 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 | test-4 | ACTIVE |
2025-05-19 22:41:07.147191 | orchestrator | | 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 | test-3 | ACTIVE |
2025-05-19 22:41:07.147202 | orchestrator | | c7b60476-003c-4b32-89e6-cc4b971830f3 | test-2 | ACTIVE |
2025-05-19 22:41:07.147213 | orchestrator | | 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 | test-1 | ACTIVE |
2025-05-19 22:41:07.147225 | orchestrator | | e200a6b5-1e27-43f8-9ce9-08589233be70 | test | ACTIVE |
2025-05-19 22:41:07.147236 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:41:07.443321 | orchestrator | + osism manage compute list testbed-node-5
2025-05-19 22:41:10.168337 | orchestrator | +------+--------+----------+
2025-05-19 22:41:10.168455 | orchestrator | | ID | Name | Status |
2025-05-19 22:41:10.168470 | orchestrator | |------+--------+----------|
2025-05-19 22:41:10.168482 | orchestrator | +------+--------+----------+
2025-05-19 22:41:10.509233 | orchestrator | + server_ping
2025-05-19 22:41:10.510300 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-05-19 22:41:10.510695 | orchestrator | ++ tr -d '\r'
2025-05-19 22:41:13.725641 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:41:13.725749 | orchestrator | + ping -c3 192.168.112.134
2025-05-19 22:41:13.736895 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2025-05-19 22:41:13.736970 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=8.54 ms
2025-05-19 22:41:14.732937 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=2.41 ms
2025-05-19 22:41:15.735350 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=2.28 ms
2025-05-19 22:41:15.735492 | orchestrator |
2025-05-19 22:41:15.735519 | orchestrator | --- 192.168.112.134 ping statistics ---
2025-05-19 22:41:15.735532 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-05-19 22:41:15.735544 | orchestrator | rtt min/avg/max/mdev = 2.281/4.412/8.543/2.921 ms
2025-05-19 22:41:15.735647 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:41:15.735664 | orchestrator | + ping -c3 192.168.112.169
2025-05-19 22:41:15.746412 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2025-05-19 22:41:15.746487 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=6.15 ms
2025-05-19 22:41:16.744225 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.48 ms
2025-05-19 22:41:17.746517 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.15 ms
2025-05-19 22:41:17.746627 | orchestrator |
2025-05-19 22:41:17.746643 | orchestrator | --- 192.168.112.169 ping statistics ---
2025-05-19 22:41:17.746675 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:41:17.746687 | orchestrator | rtt min/avg/max/mdev = 2.149/3.593/6.150/1.812 ms
2025-05-19 22:41:17.746698 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:41:17.746710 | orchestrator | + ping -c3 192.168.112.200
2025-05-19 22:41:17.758938 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data.
2025-05-19 22:41:17.758996 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=6.94 ms
2025-05-19 22:41:18.756489 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.72 ms
2025-05-19 22:41:19.758713 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=2.22 ms
2025-05-19 22:41:19.758918 | orchestrator |
2025-05-19 22:41:19.758940 | orchestrator | --- 192.168.112.200 ping statistics ---
2025-05-19 22:41:19.759769 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:41:19.759798 | orchestrator | rtt min/avg/max/mdev = 2.222/3.959/6.935/2.113 ms
2025-05-19 22:41:19.759827 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:41:19.759840 | orchestrator | + ping -c3 192.168.112.199
2025-05-19 22:41:19.772507 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data.
2025-05-19 22:41:19.772613 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=8.48 ms
2025-05-19 22:41:20.768512 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=2.57 ms
2025-05-19 22:41:21.770440 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=2.08 ms
2025-05-19 22:41:21.770544 | orchestrator |
2025-05-19 22:41:21.770558 | orchestrator | --- 192.168.112.199 ping statistics ---
2025-05-19 22:41:21.770569 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-05-19 22:41:21.770579 | orchestrator | rtt min/avg/max/mdev = 2.084/4.376/8.479/2.907 ms
2025-05-19 22:41:21.771053 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:41:21.771081 | orchestrator | + ping -c3 192.168.112.110
2025-05-19 22:41:21.780244 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2025-05-19 22:41:21.780338 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=6.73 ms
2025-05-19 22:41:22.778470 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.80 ms
2025-05-19 22:41:23.780507 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=2.02 ms
2025-05-19 22:41:23.780618 | orchestrator |
2025-05-19 22:41:23.780634 | orchestrator | --- 192.168.112.110 ping statistics ---
2025-05-19 22:41:23.780648 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-05-19 22:41:23.780660 | orchestrator | rtt min/avg/max/mdev = 2.023/3.849/6.730/2.061 ms
2025-05-19 22:41:23.780971 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-05-19 22:41:27.041603 | orchestrator | 2025-05-19 22:41:27 | INFO  | Live migrating server 54a1ba25-c6d8-47df-a7ea-2171cbde3f70
2025-05-19 22:41:39.564880 | orchestrator | 2025-05-19 22:41:39 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:41:41.973805 | orchestrator | 2025-05-19 22:41:41 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:41:44.457054 | orchestrator | 2025-05-19 22:41:44 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:41:46.717785 | orchestrator | 2025-05-19 22:41:46 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:41:49.025512 | orchestrator | 2025-05-19 22:41:49 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:41:51.316131 | orchestrator | 2025-05-19 22:41:51 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) is still in progress
2025-05-19 22:41:53.588692 | orchestrator | 2025-05-19 22:41:53 | INFO  | Live migration of 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 (test-4) completed with status ACTIVE
2025-05-19 22:41:53.588830 | orchestrator | 2025-05-19 22:41:53 | INFO  | Live migrating server 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0
2025-05-19 22:42:04.692837 | orchestrator | 2025-05-19 22:42:04 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:42:07.040524 | orchestrator | 2025-05-19 22:42:07 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:42:09.405217 | orchestrator | 2025-05-19 22:42:09 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:42:11.650677 | orchestrator | 2025-05-19 22:42:11 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:42:13.946658 | orchestrator | 2025-05-19 22:42:13 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:42:16.350806 | orchestrator | 2025-05-19 22:42:16 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) is still in progress
2025-05-19 22:42:18.708605 | orchestrator | 2025-05-19 22:42:18 | INFO  | Live migration of 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 (test-3) completed with status ACTIVE
2025-05-19 22:42:18.708738 | orchestrator | 2025-05-19 22:42:18 | INFO  | Live migrating server c7b60476-003c-4b32-89e6-cc4b971830f3
2025-05-19 22:42:28.903922 | orchestrator | 2025-05-19 22:42:28 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:42:31.261956 | orchestrator | 2025-05-19 22:42:31 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:42:33.752480 | orchestrator | 2025-05-19 22:42:33 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:42:36.094434 | orchestrator | 2025-05-19 22:42:36 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:42:38.484231 | orchestrator | 2025-05-19 22:42:38 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:42:40.783312 | orchestrator | 2025-05-19 22:42:40 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:42:43.076895 | orchestrator | 2025-05-19 22:42:43 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) is still in progress
2025-05-19 22:42:45.404788 | orchestrator | 2025-05-19 22:42:45 | INFO  | Live migration of c7b60476-003c-4b32-89e6-cc4b971830f3 (test-2) completed with status ACTIVE
2025-05-19 22:42:45.405493 | orchestrator | 2025-05-19 22:42:45 | INFO  | Live migrating server 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7
2025-05-19 22:42:55.955587 | orchestrator | 2025-05-19 22:42:55 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:42:58.262449 | orchestrator | 2025-05-19 22:42:58 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:43:00.630832 | orchestrator | 2025-05-19 22:43:00 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:43:02.927525 | orchestrator | 2025-05-19 22:43:02 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:43:05.229680 | orchestrator | 2025-05-19 22:43:05 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:43:07.608098 | orchestrator | 2025-05-19 22:43:07 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) is still in progress
2025-05-19 22:43:09.875732 | orchestrator | 2025-05-19 22:43:09 | INFO  | Live migration of 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 (test-1) completed with status ACTIVE
2025-05-19 22:43:09.876650 | orchestrator | 2025-05-19 22:43:09 | INFO  | Live migrating server e200a6b5-1e27-43f8-9ce9-08589233be70
2025-05-19 22:43:20.202731 | orchestrator | 2025-05-19 22:43:20 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:43:22.525759 | orchestrator | 2025-05-19 22:43:22 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:43:24.869595 | orchestrator | 2025-05-19 22:43:24 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:43:27.137766 | orchestrator | 2025-05-19 22:43:27 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:43:29.428696 | orchestrator | 2025-05-19 22:43:29 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:43:31.756065 | orchestrator | 2025-05-19 22:43:31 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:43:34.063565 | orchestrator | 2025-05-19 22:43:34 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:43:36.338544 | orchestrator | 2025-05-19 22:43:36 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) is still in progress
2025-05-19 22:43:38.679630 | orchestrator | 2025-05-19 22:43:38 | INFO  | Live migration of e200a6b5-1e27-43f8-9ce9-08589233be70 (test) completed with status ACTIVE
2025-05-19 22:43:39.046444 | orchestrator | + compute_list
2025-05-19 22:43:39.046921 | orchestrator | + osism manage compute list testbed-node-3
2025-05-19 22:43:41.635064 | orchestrator | +------+--------+----------+
2025-05-19 22:43:41.635190 | orchestrator | | ID | Name | Status |
2025-05-19 22:43:41.635205 | orchestrator | |------+--------+----------|
2025-05-19 22:43:41.635218 | orchestrator | +------+--------+----------+
2025-05-19 22:43:41.960159 | orchestrator | + osism manage compute list testbed-node-4
2025-05-19 22:43:44.504752 | orchestrator | +------+--------+----------+
2025-05-19 22:43:44.504874 | orchestrator | | ID | Name | Status |
2025-05-19 22:43:44.504889 | orchestrator | |------+--------+----------|
2025-05-19 22:43:44.504901 | orchestrator | +------+--------+----------+
2025-05-19 22:43:44.843204 | orchestrator | + osism manage compute list testbed-node-5
2025-05-19 22:43:48.026197 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:43:48.026312 | orchestrator | | ID | Name | Status |
2025-05-19 22:43:48.026327 | orchestrator | |--------------------------------------+--------+----------|
2025-05-19 22:43:48.026339 | orchestrator | | 54a1ba25-c6d8-47df-a7ea-2171cbde3f70 | test-4 | ACTIVE |
2025-05-19 22:43:48.026350 | orchestrator | | 0c9e6d7d-a0c7-43b3-9081-203924e4cbe0 | test-3 | ACTIVE |
2025-05-19 22:43:48.026362 | orchestrator | | c7b60476-003c-4b32-89e6-cc4b971830f3 | test-2 | ACTIVE |
2025-05-19 22:43:48.026373 | orchestrator | | 9c3dbe8a-e221-4bdd-b9b5-c8770ef0a6e7 | test-1 | ACTIVE |
2025-05-19 22:43:48.026384 | orchestrator | | e200a6b5-1e27-43f8-9ce9-08589233be70 | test | ACTIVE |
2025-05-19 22:43:48.026395 | orchestrator | +--------------------------------------+--------+----------+
2025-05-19 22:43:48.367590 | orchestrator | + server_ping
2025-05-19 22:43:48.368756 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-05-19 22:43:48.368876 | orchestrator | ++ tr -d '\r'
2025-05-19 22:43:51.374146 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:43:51.374255 | orchestrator | + ping -c3 192.168.112.134
2025-05-19 22:43:51.387228 | orchestrator | PING 192.168.112.134 (192.168.112.134) 56(84) bytes of data.
2025-05-19 22:43:51.387296 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=1 ttl=63 time=9.04 ms
2025-05-19 22:43:52.382219 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=2 ttl=63 time=2.52 ms
2025-05-19 22:43:53.383620 | orchestrator | 64 bytes from 192.168.112.134: icmp_seq=3 ttl=63 time=1.89 ms
2025-05-19 22:43:53.383703 | orchestrator |
2025-05-19 22:43:53.383709 | orchestrator | --- 192.168.112.134 ping statistics ---
2025-05-19 22:43:53.383715 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:43:53.383720 | orchestrator | rtt min/avg/max/mdev = 1.891/4.480/9.035/3.230 ms
2025-05-19 22:43:53.383726 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:43:53.383731 | orchestrator | + ping -c3 192.168.112.169
2025-05-19 22:43:53.396498 | orchestrator | PING 192.168.112.169 (192.168.112.169) 56(84) bytes of data.
2025-05-19 22:43:53.396530 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=1 ttl=63 time=7.52 ms
2025-05-19 22:43:54.392814 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=2 ttl=63 time=2.35 ms
2025-05-19 22:43:55.394142 | orchestrator | 64 bytes from 192.168.112.169: icmp_seq=3 ttl=63 time=2.02 ms
2025-05-19 22:43:55.394281 | orchestrator |
2025-05-19 22:43:55.394299 | orchestrator | --- 192.168.112.169 ping statistics ---
2025-05-19 22:43:55.394313 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:43:55.394324 | orchestrator | rtt min/avg/max/mdev = 2.021/3.963/7.523/2.520 ms
2025-05-19 22:43:55.394350 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:43:55.395083 | orchestrator | + ping -c3 192.168.112.200
2025-05-19 22:43:55.403030 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data.
2025-05-19 22:43:55.403166 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=6.65 ms
2025-05-19 22:43:56.400925 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.43 ms
2025-05-19 22:43:57.402473 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=2.02 ms
2025-05-19 22:43:57.402634 | orchestrator |
2025-05-19 22:43:57.402653 | orchestrator | --- 192.168.112.200 ping statistics ---
2025-05-19 22:43:57.402757 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:43:57.402778 | orchestrator | rtt min/avg/max/mdev = 2.023/3.702/6.651/2.091 ms
2025-05-19 22:43:57.403170 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:43:57.403195 | orchestrator | + ping -c3 192.168.112.199
2025-05-19 22:43:57.416478 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data.
2025-05-19 22:43:57.416554 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=8.65 ms
2025-05-19 22:43:58.412715 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=2.67 ms
2025-05-19 22:43:59.414356 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=2.27 ms
2025-05-19 22:43:59.414472 | orchestrator |
2025-05-19 22:43:59.414488 | orchestrator | --- 192.168.112.199 ping statistics ---
2025-05-19 22:43:59.414500 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:43:59.414512 | orchestrator | rtt min/avg/max/mdev = 2.265/4.529/8.651/2.919 ms
2025-05-19 22:43:59.414524 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-05-19 22:43:59.414535 | orchestrator | + ping -c3 192.168.112.110
2025-05-19 22:43:59.426565 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2025-05-19 22:43:59.426697 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=7.97 ms
2025-05-19 22:44:00.422676 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.50 ms
2025-05-19 22:44:01.424590 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=1.85 ms
2025-05-19 22:44:01.424760 | orchestrator |
2025-05-19 22:44:01.424778 | orchestrator | --- 192.168.112.110 ping statistics ---
2025-05-19 22:44:01.424792 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-05-19 22:44:01.424804 | orchestrator | rtt min/avg/max/mdev = 1.851/4.107/7.971/2.745 ms
2025-05-19 22:44:01.922763 | orchestrator | ok: Runtime: 0:18:33.577871
2025-05-19 22:44:01.982553 |
2025-05-19 22:44:01.982685 | TASK [Run tempest]
2025-05-19 22:44:02.517888 | orchestrator | skipping: Conditional result was False
2025-05-19 22:44:02.535413 |
2025-05-19 22:44:02.535583 | TASK [Check prometheus alert status]
2025-05-19 22:44:03.070603 | orchestrator | skipping: Conditional result was False
2025-05-19 22:44:03.072447 |
2025-05-19 22:44:03.072537 | PLAY RECAP
2025-05-19 22:44:03.072602 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-05-19 22:44:03.072629 |
2025-05-19 22:44:03.294082 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-05-19 22:44:03.296639 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-19 22:44:04.085457 |
2025-05-19 22:44:04.085716 | PLAY [Post output play]
2025-05-19 22:44:04.102022 |
2025-05-19 22:44:04.102164 | LOOP [stage-output : Register sources]
2025-05-19 22:44:04.171719 |
2025-05-19 22:44:04.172139 | TASK [stage-output : Check sudo]
2025-05-19 22:44:05.030669 | orchestrator | sudo: a password is required
2025-05-19 22:44:05.215078 | orchestrator | ok: Runtime: 0:00:00.013339
2025-05-19 22:44:05.230070 |
2025-05-19 22:44:05.230241 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-19 22:44:05.269792 |
2025-05-19 22:44:05.270319 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-19 22:44:05.337453 | orchestrator | ok
2025-05-19 22:44:05.347562 |
2025-05-19 22:44:05.347733 | LOOP [stage-output : Ensure target folders exist]
2025-05-19 22:44:05.788457 | orchestrator | ok: "docs"
2025-05-19 22:44:05.788882 |
2025-05-19 22:44:06.037216 | orchestrator | ok: "artifacts"
2025-05-19 22:44:06.293715 | orchestrator | ok: "logs"
2025-05-19 22:44:06.320750 |
2025-05-19 22:44:06.321000 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-19 22:44:06.360035 |
2025-05-19 22:44:06.360322 | TASK [stage-output : Make all log files readable]
2025-05-19 22:44:06.663341 | orchestrator | ok
2025-05-19 22:44:06.672551 |
2025-05-19 22:44:06.672691 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-19 22:44:06.707930 | orchestrator | skipping: Conditional result was False
2025-05-19 22:44:06.724758 |
2025-05-19 22:44:06.725033 | TASK [stage-output : Discover log files for compression]
2025-05-19 22:44:06.750030 | orchestrator | skipping: Conditional result was False
2025-05-19 22:44:06.757490 |
2025-05-19 22:44:06.757604 | LOOP [stage-output : Archive everything from logs]
2025-05-19 22:44:06.819164 |
2025-05-19 22:44:06.819383 | PLAY [Post cleanup play]
2025-05-19 22:44:06.829160 |
2025-05-19 22:44:06.829285 | TASK [Set cloud fact (Zuul deployment)]
2025-05-19 22:44:06.898603 | orchestrator | ok
2025-05-19 22:44:06.917494 |
2025-05-19 22:44:06.917640 | TASK [Set cloud fact (local deployment)]
2025-05-19 22:44:06.962221 | orchestrator | skipping: Conditional result was False
2025-05-19 22:44:06.976466 |
2025-05-19 22:44:06.976642 | TASK [Clean the cloud environment]
2025-05-19 22:44:07.569011 | orchestrator | 2025-05-19 22:44:07 - clean up servers
2025-05-19 22:44:08.365710 | orchestrator | 2025-05-19 22:44:08 - testbed-manager
2025-05-19 22:44:08.450962 | orchestrator | 2025-05-19 22:44:08 - testbed-node-2
2025-05-19 22:44:08.542701 | orchestrator | 2025-05-19 22:44:08 - testbed-node-0
2025-05-19 22:44:08.632491 | orchestrator | 2025-05-19 22:44:08 - testbed-node-4
2025-05-19 22:44:08.729845 | orchestrator | 2025-05-19 22:44:08 - testbed-node-5
2025-05-19 22:44:08.823048 | orchestrator | 2025-05-19 22:44:08 - testbed-node-3
2025-05-19 22:44:08.908983 | orchestrator | 2025-05-19 22:44:08 - testbed-node-1
2025-05-19 22:44:09.003969 | orchestrator | 2025-05-19 22:44:09 - clean up keypairs
2025-05-19 22:44:09.025346 | orchestrator | 2025-05-19 22:44:09 - testbed
2025-05-19 22:44:09.048543 | orchestrator | 2025-05-19 22:44:09 - wait for servers to be gone
2025-05-19 22:44:17.750077 | orchestrator | 2025-05-19 22:44:17 - clean up ports
2025-05-19 22:44:17.940465 | orchestrator | 2025-05-19 22:44:17 - 30b9cba6-e54c-4a58-a10f-a2e4a3ef2c2a
2025-05-19 22:44:18.218236 | orchestrator | 2025-05-19 22:44:18 - 5378cac9-ccc3-4d6d-8a31-ef68d9a1e536
2025-05-19 22:44:18.475583 | orchestrator | 2025-05-19 22:44:18 - af5bdf0a-0206-4cff-924a-6c0bbfd01546
2025-05-19 22:44:18.945601 | orchestrator | 2025-05-19 22:44:18 - bd7f78b5-79a8-4347-a330-b4d1698a4ffd
2025-05-19 22:44:19.225752 | orchestrator | 2025-05-19 22:44:19 - e4802e74-d3ca-49ff-8a28-43541eaadacc
2025-05-19 22:44:19.836604 | orchestrator | 2025-05-19 22:44:19 - eaea3d2d-5c8c-4613-a0bf-13bc6da2b6ee
2025-05-19 22:44:20.033972 | orchestrator | 2025-05-19 22:44:20 - ec07ad83-8ab4-4e48-8c99-653e5d074afe
2025-05-19 22:44:20.230363 | orchestrator | 2025-05-19 22:44:20 - clean up volumes
2025-05-19 22:44:20.342527 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-3-node-base
2025-05-19 22:44:20.381301 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-2-node-base
2025-05-19 22:44:20.427257 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-0-node-base
2025-05-19 22:44:20.469859 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-4-node-base
2025-05-19 22:44:20.512204 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-5-node-base
2025-05-19 22:44:20.557872 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-manager-base
2025-05-19 22:44:20.602111 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-1-node-base
2025-05-19 22:44:20.641906 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-2-node-5
2025-05-19 22:44:20.681571 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-4-node-4
2025-05-19 22:44:20.725131 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-0-node-3
2025-05-19 22:44:20.767629 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-1-node-4
2025-05-19 22:44:20.817852 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-3-node-3
2025-05-19 22:44:20.859538 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-6-node-3
2025-05-19 22:44:20.902603 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-7-node-4
2025-05-19 22:44:20.941120 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-5-node-5
2025-05-19 22:44:20.987777 | orchestrator | 2025-05-19 22:44:20 - testbed-volume-8-node-5
2025-05-19 22:44:21.026583 | orchestrator | 2025-05-19 22:44:21 - disconnect routers
2025-05-19 22:44:21.138159 | orchestrator | 2025-05-19 22:44:21 - testbed
2025-05-19 22:44:22.000499 | orchestrator | 2025-05-19 22:44:22 - clean up subnets
2025-05-19 22:44:22.050774 | orchestrator | 2025-05-19 22:44:22 - subnet-testbed-management
2025-05-19 22:44:22.210798 | orchestrator | 2025-05-19 22:44:22 - clean up networks
2025-05-19 22:44:22.359784 | orchestrator | 2025-05-19 22:44:22 - net-testbed-management
2025-05-19 22:44:22.697088 | orchestrator | 2025-05-19 22:44:22 - clean up security groups
2025-05-19 22:44:22.768484 | orchestrator | 2025-05-19 22:44:22 - testbed-management
2025-05-19 22:44:22.885068 | orchestrator | 2025-05-19 22:44:22 - testbed-node
2025-05-19 22:44:22.990470 | orchestrator | 2025-05-19 22:44:22 - clean up floating ips
2025-05-19 22:44:23.025788 | orchestrator | 2025-05-19 22:44:23 - 81.163.193.197
2025-05-19 22:44:23.801222 | orchestrator | 2025-05-19 22:44:23 - clean up routers
2025-05-19 22:44:23.896767 | orchestrator | 2025-05-19 22:44:23 - testbed
2025-05-19 22:44:25.046157 | orchestrator | ok: Runtime: 0:00:17.526106
2025-05-19 22:44:25.050510 |
2025-05-19 22:44:25.050670 | PLAY RECAP
2025-05-19 22:44:25.050801 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-19 22:44:25.050934 |
2025-05-19 22:44:25.197714 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-19 22:44:25.198712 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-19 22:44:25.959642 |
2025-05-19 22:44:25.959849 | PLAY [Cleanup play]
2025-05-19 22:44:25.976189 |
2025-05-19 22:44:25.976332 | TASK [Set cloud fact (Zuul deployment)]
2025-05-19 22:44:26.037729 | orchestrator | ok
2025-05-19 22:44:26.049399 |
2025-05-19 22:44:26.049586 | TASK [Set cloud fact (local deployment)]
2025-05-19 22:44:26.085428 | orchestrator | skipping: Conditional result was False
2025-05-19 22:44:26.102789 |
2025-05-19 22:44:26.103038 | TASK [Clean the cloud environment]
2025-05-19 22:44:27.285048 | orchestrator | 2025-05-19 22:44:27 - clean up servers
2025-05-19 22:44:27.767485 | orchestrator | 2025-05-19 22:44:27 - clean up keypairs
2025-05-19 22:44:27.788774 | orchestrator | 2025-05-19 22:44:27 - wait for servers to be gone
2025-05-19 22:44:27.836048 | orchestrator | 2025-05-19 22:44:27 - clean up ports
2025-05-19 22:44:27.919569 | orchestrator | 2025-05-19 22:44:27 - clean up volumes
2025-05-19 22:44:27.988020 | orchestrator | 2025-05-19 22:44:27 - disconnect routers
2025-05-19 22:44:28.011392 | orchestrator | 2025-05-19 22:44:28 - clean up subnets
2025-05-19 22:44:28.037903 | orchestrator | 2025-05-19 22:44:28 - clean up networks
2025-05-19 22:44:28.195061 | orchestrator | 2025-05-19 22:44:28 - clean up security groups
2025-05-19 22:44:28.236728 | orchestrator | 2025-05-19 22:44:28 - clean up floating ips
2025-05-19 22:44:28.261912 | orchestrator | 2025-05-19 22:44:28 - clean up routers
2025-05-19 22:44:28.643095 | orchestrator | ok: Runtime: 0:00:01.378178
2025-05-19 22:44:28.647002 |
2025-05-19 22:44:28.647141 | PLAY RECAP
2025-05-19 22:44:28.647241 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-19 22:44:28.647292 |
2025-05-19 22:44:28.775655 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-19 22:44:28.776681 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-19 22:44:29.533951 |
2025-05-19 22:44:29.534119 | PLAY [Base post-fetch]
2025-05-19 22:44:29.549955 |
2025-05-19 22:44:29.550105 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-19 22:44:29.607057 | orchestrator | skipping: Conditional result was False
2025-05-19 22:44:29.620605 |
2025-05-19 22:44:29.620787 | TASK [fetch-output : Set log path for single node]
2025-05-19 22:44:29.679410 | orchestrator | ok
2025-05-19 22:44:29.689287 |
2025-05-19 22:44:29.689449 | LOOP [fetch-output : Ensure local output dirs]
2025-05-19 22:44:30.192181 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/fee901b6ba114e6d9b855d30c91c5e56/work/logs"
2025-05-19 22:44:30.485136 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/fee901b6ba114e6d9b855d30c91c5e56/work/artifacts"
2025-05-19 22:44:30.787268 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/fee901b6ba114e6d9b855d30c91c5e56/work/docs"
2025-05-19 22:44:30.818188 |
2025-05-19 22:44:30.818371 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-19 22:44:31.752007 | orchestrator | changed: .d..t...... ./
2025-05-19 22:44:31.752283 | orchestrator | changed: All items complete
2025-05-19 22:44:31.752322 |
2025-05-19 22:44:32.494237 | orchestrator | changed: .d..t...... ./
2025-05-19 22:44:33.300405 | orchestrator | changed: .d..t...... ./
2025-05-19 22:44:33.329588 |
2025-05-19 22:44:33.329742 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-19 22:44:33.381946 | orchestrator | skipping: Conditional result was False
2025-05-19 22:44:33.386406 | orchestrator | skipping: Conditional result was False
2025-05-19 22:44:33.410749 |
2025-05-19 22:44:33.410925 | PLAY RECAP
2025-05-19 22:44:33.411012 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-19 22:44:33.411056 |
2025-05-19 22:44:33.551801 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-19 22:44:33.552771 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-19 22:44:34.271735 |
2025-05-19 22:44:34.271921 | PLAY [Base post]
2025-05-19 22:44:34.286445 |
2025-05-19 22:44:34.286596 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-19 22:44:35.341304 | orchestrator | changed
2025-05-19 22:44:35.352082 |
2025-05-19 22:44:35.352227 | PLAY RECAP
2025-05-19 22:44:35.352307 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-19 22:44:35.352381 |
2025-05-19 22:44:35.492199 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-19 22:44:35.495290 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-19 22:44:36.290975 |
2025-05-19 22:44:36.291156 | PLAY [Base post-logs]
2025-05-19 22:44:36.302239 |
2025-05-19 22:44:36.302395 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-19 22:44:36.809979 | localhost | changed
2025-05-19 22:44:36.827177 |
2025-05-19 22:44:36.827360 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-19 22:44:36.865925 | localhost | ok
2025-05-19 22:44:36.871931 |
2025-05-19 22:44:36.872085 | TASK [Set zuul-log-path fact]
2025-05-19 22:44:36.890501 | localhost | ok
2025-05-19 22:44:36.905771 |
2025-05-19 22:44:36.905970 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-19 22:44:36.944281 | localhost | ok
2025-05-19 22:44:36.950649 |
2025-05-19 22:44:36.950806 | TASK [upload-logs : Create log directories]
2025-05-19 22:44:37.485672 | localhost | changed
2025-05-19 22:44:37.490411 |
2025-05-19 22:44:37.490577 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-19 22:44:38.006338 | localhost -> localhost | ok: Runtime: 0:00:00.007493
2025-05-19 22:44:38.016557 |
2025-05-19 22:44:38.016784 | TASK [upload-logs : Upload logs to log server]
2025-05-19 22:44:38.587094 | localhost | Output suppressed because no_log was given
2025-05-19 22:44:38.590725 |
2025-05-19 22:44:38.590960 | LOOP [upload-logs : Compress console log and json output]
2025-05-19 22:44:38.652480 | localhost | skipping: Conditional result was False
2025-05-19 22:44:38.660537 | localhost | skipping: Conditional result was False
2025-05-19 22:44:38.674042 |
2025-05-19 22:44:38.674337 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-19 22:44:38.736476 | localhost | skipping: Conditional result was False
2025-05-19 22:44:38.737050 |
2025-05-19 22:44:38.742080 | localhost | skipping: Conditional result was False
2025-05-19 22:44:38.756632 |
2025-05-19 22:44:38.756999 | LOOP [upload-logs : Upload console log and json output]
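The two "Clean the cloud environment" tasks above tear resources down in a fixed dependency order: servers first (so their ports and volumes become deletable), then ports and volumes, routers disconnected before subnets and networks can go, and the routers themselves last. A sketch of that ordering; the `cleanup_order` helper is illustrative only, not the playbook's real implementation (which talks to the OpenStack API):

```shell
# Illustrative list of the teardown phases logged by the cleanup task.
# The function name and echo-based form are assumptions for clarity.
cleanup_order() {
    printf '%s\n' \
        "clean up servers" \
        "clean up keypairs" \
        "wait for servers to be gone" \
        "clean up ports" \
        "clean up volumes" \
        "disconnect routers" \
        "clean up subnets" \
        "clean up networks" \
        "clean up security groups" \
        "clean up floating ips" \
        "clean up routers"
}
```

The second run of the task (in cleanup.yml) walks the same phases but finds nothing left to delete, which is why it finishes in about a second.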